Nov 24 16:51:57 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 24 16:51:57 crc restorecon[4700]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 16:51:57 crc restorecon[4700]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 16:51:57 crc restorecon[4700]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 16:51:57 crc restorecon[4700]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 16:51:57 crc restorecon[4700]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:57 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 
16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 16:51:58 crc 
restorecon[4700]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 
16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 16:51:58 crc restorecon[4700]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 16:51:58 crc restorecon[4700]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 16:51:58 crc restorecon[4700]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 24 16:51:59 crc kubenswrapper[4768]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 16:51:59 crc kubenswrapper[4768]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 24 16:51:59 crc kubenswrapper[4768]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 16:51:59 crc kubenswrapper[4768]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
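The long "not reset as customized by admin" run above is restorecon skipping paths deliberately, not failing: on RHEL-family targeted policies container_file_t is listed among the customizable SELinux types, so without the -F flag restorecon leaves any file already carrying such a type alone instead of resetting it to the policy default, and the context printed in each message is the file's current (kept) label. As a rough illustration of what is being compared, the Python sketch below only reads the security.selinux extended attribute that restorecon inspects; it assumes a Linux host with SELinux enabled, the default path is taken from the log above, and this is not restorecon's actual implementation.

#!/usr/bin/env python3
# Illustrative sketch only: print the current SELinux label of each path,
# i.e. the context restorecon compares against the policy default before
# deciding whether to reset it. Not restorecon's real logic.
import os
import sys

def selinux_label(path: str) -> str:
    # The label is stored as the NUL-terminated security.selinux xattr.
    raw = os.getxattr(path, "security.selinux")
    return raw.rstrip(b"\x00").decode()

if __name__ == "__main__":
    for p in sys.argv[1:] or ["/var/lib/kubelet/plugins"]:
        try:
            print(f"{p}\t{selinux_label(p)}")
        except OSError as e:
            print(f"{p}\t<{e.strerror}>")

Run against /var/lib/kubelet/plugins on the node above, this should print the system_u:object_r:container_file_t:s0 context that restorecon reported declining to reset.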
Nov 24 16:51:59 crc kubenswrapper[4768]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 24 16:51:59 crc kubenswrapper[4768]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.274051 4768 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279605 4768 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279637 4768 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279646 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279656 4768 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279667 4768 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279678 4768 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279688 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279696 4768 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279704 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279715 4768 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279726 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279736 4768 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279744 4768 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279752 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279760 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279768 4768 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279776 4768 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279784 4768 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279792 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279800 4768 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279808 4768 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279839 4768 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279848 4768 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279858 4768 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279868 4768 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279876 4768 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279884 4768 feature_gate.go:330] unrecognized feature gate: Example Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279892 4768 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279900 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279907 4768 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279915 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279924 4768 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279932 4768 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279941 4768 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279949 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279959 4768 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279969 4768 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279981 4768 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.279991 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280000 4768 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280009 4768 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280018 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280027 4768 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280035 4768 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280045 4768 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280054 4768 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280065 4768 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
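
The "unrecognized feature gate" warnings, which continue below and are re-emitted several times as the gate set is re-applied, all share klog's layout: a severity letter plus MMDD timestamp, a PID, a file:line tag, then the message. A small parser can deduplicate them into the distinct gate names involved. This is a minimal sketch that reads journal text on stdin; it is illustrative, not part of any kubelet tooling.

# Sketch: collect the distinct gate names from klog warning lines like
#   W1124 16:51:59.279605 4768 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
# Reads log text on stdin and prints each unrecognized gate once.
import re
import sys

PATTERN = re.compile(r"feature_gate\.go:\d+\] unrecognized feature gate: (\S+)")

def unrecognized_gates(lines):
    gates = set()
    for line in lines:
        m = PATTERN.search(line)
        if m:
            gates.add(m.group(1))
    return sorted(gates)

if __name__ == "__main__":
    for gate in unrecognized_gates(sys.stdin):
        print(gate)
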
Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280076 4768 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280085 4768 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280093 4768 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280101 4768 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280109 4768 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280117 4768 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280126 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280134 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280142 4768 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280150 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280158 4768 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280166 4768 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280174 4768 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280182 4768 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280189 4768 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280197 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280207 4768 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280214 4768 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280222 4768 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280230 4768 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280238 4768 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280246 4768 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280254 4768 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.280262 4768 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.281140 4768 flags.go:64] FLAG: --address="0.0.0.0" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.281165 4768 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 24 16:51:59 crc 
kubenswrapper[4768]: I1124 16:51:59.283958 4768 flags.go:64] FLAG: --anonymous-auth="true" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.283974 4768 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.283992 4768 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284004 4768 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284016 4768 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284027 4768 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284038 4768 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284048 4768 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284058 4768 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284068 4768 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284077 4768 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284087 4768 flags.go:64] FLAG: --cgroup-root="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284095 4768 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284104 4768 flags.go:64] FLAG: --client-ca-file="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284113 4768 flags.go:64] FLAG: --cloud-config="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284122 4768 flags.go:64] FLAG: --cloud-provider="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284131 4768 flags.go:64] FLAG: --cluster-dns="[]" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284142 4768 flags.go:64] FLAG: --cluster-domain="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284151 4768 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284160 4768 flags.go:64] FLAG: --config-dir="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284170 4768 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284182 4768 flags.go:64] FLAG: --container-log-max-files="5" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284195 4768 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284205 4768 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284215 4768 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284226 4768 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284236 4768 flags.go:64] FLAG: --contention-profiling="false" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284245 4768 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284254 4768 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284264 4768 
flags.go:64] FLAG: --cpu-manager-policy="none" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284273 4768 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284284 4768 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284294 4768 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284303 4768 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284312 4768 flags.go:64] FLAG: --enable-load-reader="false" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284321 4768 flags.go:64] FLAG: --enable-server="true" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284329 4768 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284341 4768 flags.go:64] FLAG: --event-burst="100" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284379 4768 flags.go:64] FLAG: --event-qps="50" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284389 4768 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284398 4768 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284407 4768 flags.go:64] FLAG: --eviction-hard="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284418 4768 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284427 4768 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284436 4768 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284445 4768 flags.go:64] FLAG: --eviction-soft="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284454 4768 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284464 4768 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284474 4768 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284483 4768 flags.go:64] FLAG: --experimental-mounter-path="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284492 4768 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284501 4768 flags.go:64] FLAG: --fail-swap-on="true" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284510 4768 flags.go:64] FLAG: --feature-gates="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284521 4768 flags.go:64] FLAG: --file-check-frequency="20s" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284530 4768 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284539 4768 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284550 4768 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284559 4768 flags.go:64] FLAG: --healthz-port="10248" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284569 4768 flags.go:64] FLAG: --help="false" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284578 4768 flags.go:64] FLAG: 
--hostname-override="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284587 4768 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284597 4768 flags.go:64] FLAG: --http-check-frequency="20s" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284606 4768 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284615 4768 flags.go:64] FLAG: --image-credential-provider-config="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284625 4768 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284634 4768 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284643 4768 flags.go:64] FLAG: --image-service-endpoint="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284652 4768 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284660 4768 flags.go:64] FLAG: --kube-api-burst="100" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284670 4768 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284680 4768 flags.go:64] FLAG: --kube-api-qps="50" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284688 4768 flags.go:64] FLAG: --kube-reserved="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284697 4768 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284706 4768 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284723 4768 flags.go:64] FLAG: --kubelet-cgroups="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284732 4768 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284741 4768 flags.go:64] FLAG: --lock-file="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284752 4768 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284762 4768 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284772 4768 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284786 4768 flags.go:64] FLAG: --log-json-split-stream="false" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284795 4768 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284803 4768 flags.go:64] FLAG: --log-text-split-stream="false" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284812 4768 flags.go:64] FLAG: --logging-format="text" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284821 4768 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284831 4768 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284840 4768 flags.go:64] FLAG: --manifest-url="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284849 4768 flags.go:64] FLAG: --manifest-url-header="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284862 4768 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284871 4768 flags.go:64] FLAG: 
--max-open-files="1000000" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284882 4768 flags.go:64] FLAG: --max-pods="110" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284891 4768 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284900 4768 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284908 4768 flags.go:64] FLAG: --memory-manager-policy="None" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284917 4768 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284926 4768 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284935 4768 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284945 4768 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284964 4768 flags.go:64] FLAG: --node-status-max-images="50" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284973 4768 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284982 4768 flags.go:64] FLAG: --oom-score-adj="-999" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.284992 4768 flags.go:64] FLAG: --pod-cidr="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285000 4768 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285015 4768 flags.go:64] FLAG: --pod-manifest-path="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285024 4768 flags.go:64] FLAG: --pod-max-pids="-1" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285034 4768 flags.go:64] FLAG: --pods-per-core="0" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285043 4768 flags.go:64] FLAG: --port="10250" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285052 4768 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285061 4768 flags.go:64] FLAG: --provider-id="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285070 4768 flags.go:64] FLAG: --qos-reserved="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285081 4768 flags.go:64] FLAG: --read-only-port="10255" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285090 4768 flags.go:64] FLAG: --register-node="true" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285099 4768 flags.go:64] FLAG: --register-schedulable="true" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285108 4768 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285123 4768 flags.go:64] FLAG: --registry-burst="10" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285133 4768 flags.go:64] FLAG: --registry-qps="5" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285142 4768 flags.go:64] FLAG: --reserved-cpus="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285151 4768 flags.go:64] FLAG: --reserved-memory="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285162 4768 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" 
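
Each flags.go:64 entry in this dump (which continues below) records one effective kubelet flag as FLAG: --name="value". The sketch below recovers a flag-to-value mapping from that text; the regex is keyed to the flags.go:64 tag visible in these entries and assumes values themselves contain no double quotes, which holds for the dump shown here. Illustrative only.

# Sketch: parse the flags.go:64 dump into a {flag: value} dict, given
# entries like
#   I1124 16:51:59.284945 4768 flags.go:64] FLAG: --node-ip="192.168.126.11"
import re
import sys

FLAG_RE = re.compile(r'flags\.go:64\] FLAG: (--[\w.-]+)="(.*?)"')

def parse_flags(text: str) -> dict:
    return {name: value for name, value in FLAG_RE.findall(text)}

if __name__ == "__main__":
    flags = parse_flags(sys.stdin.read())
    print(f"{len(flags)} flags; node-ip={flags.get('--node-ip')!r}")
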
Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285172 4768 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285183 4768 flags.go:64] FLAG: --rotate-certificates="false" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285195 4768 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285206 4768 flags.go:64] FLAG: --runonce="false" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285215 4768 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285225 4768 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285236 4768 flags.go:64] FLAG: --seccomp-default="false" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285245 4768 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285255 4768 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285264 4768 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285273 4768 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285282 4768 flags.go:64] FLAG: --storage-driver-password="root" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285293 4768 flags.go:64] FLAG: --storage-driver-secure="false" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285301 4768 flags.go:64] FLAG: --storage-driver-table="stats" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285310 4768 flags.go:64] FLAG: --storage-driver-user="root" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285319 4768 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285328 4768 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285338 4768 flags.go:64] FLAG: --system-cgroups="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285351 4768 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285389 4768 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285398 4768 flags.go:64] FLAG: --tls-cert-file="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285408 4768 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285424 4768 flags.go:64] FLAG: --tls-min-version="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285436 4768 flags.go:64] FLAG: --tls-private-key-file="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285447 4768 flags.go:64] FLAG: --topology-manager-policy="none" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285458 4768 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285469 4768 flags.go:64] FLAG: --topology-manager-scope="container" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285483 4768 flags.go:64] FLAG: --v="2" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285499 4768 flags.go:64] FLAG: --version="false" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285514 4768 flags.go:64] FLAG: --vmodule="" Nov 24 16:51:59 crc 
kubenswrapper[4768]: I1124 16:51:59.285528 4768 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.285540 4768 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285762 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285773 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285783 4768 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285792 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285801 4768 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285809 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285818 4768 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285827 4768 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285835 4768 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285843 4768 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285850 4768 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285859 4768 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285867 4768 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285874 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285882 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285890 4768 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285897 4768 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285905 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285913 4768 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285921 4768 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285928 4768 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285936 4768 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285946 4768 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285956 4768 feature_gate.go:353] Setting GA feature gate 
DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285966 4768 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285977 4768 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285986 4768 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.285995 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286004 4768 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286012 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286022 4768 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286033 4768 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286042 4768 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286051 4768 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286059 4768 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286068 4768 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286076 4768 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286084 4768 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286094 4768 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286102 4768 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286110 4768 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286118 4768 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286126 4768 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286135 4768 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286144 4768 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286152 4768 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286160 4768 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286169 4768 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286178 4768 feature_gate.go:330] 
unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286186 4768 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286195 4768 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286206 4768 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286216 4768 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286226 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286236 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286244 4768 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286253 4768 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286262 4768 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286271 4768 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286279 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286287 4768 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286295 4768 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286303 4768 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286311 4768 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286319 4768 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286327 4768 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286335 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286351 4768 feature_gate.go:330] unrecognized feature gate: Example Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286383 4768 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286392 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.286400 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.286424 4768 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false 
UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.299666 4768 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.299714 4768 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.299872 4768 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.299889 4768 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.299900 4768 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.299911 4768 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.299924 4768 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.299939 4768 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.299949 4768 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.299958 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.299968 4768 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.299977 4768 feature_gate.go:330] unrecognized feature gate: Example Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.299986 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.299995 4768 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300004 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300013 4768 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300021 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300029 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300037 4768 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300045 4768 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300053 4768 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300062 4768 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300070 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300078 4768 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300086 4768 feature_gate.go:330] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300094 4768 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300102 4768 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300110 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300120 4768 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300129 4768 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300138 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300149 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300162 4768 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300174 4768 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300183 4768 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300191 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300201 4768 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300209 4768 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300217 4768 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300225 4768 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300233 4768 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300242 4768 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300250 4768 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300258 4768 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300266 4768 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300275 4768 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300283 4768 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300291 4768 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300299 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300307 4768 feature_gate.go:330] unrecognized feature gate: 
MachineConfigNodes Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300315 4768 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300323 4768 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300332 4768 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300340 4768 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300348 4768 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300392 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300403 4768 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300436 4768 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300445 4768 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300457 4768 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300468 4768 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300477 4768 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300485 4768 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300493 4768 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300501 4768 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300509 4768 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300516 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300524 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300532 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300539 4768 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300547 4768 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300558 4768 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
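
After each pass over the warnings, the kubelet logs the effective gate set as an I-level "feature gates: {map[...]}" summary, as seen above at feature_gate.go:386 and repeated again below. The braces-and-map[...] shape is Go's fmt rendering of a map. As a hedged sketch, the parser below turns one such line into a Python dict, assuming gate names contain no spaces or colons (true of every gate in this log).

# Sketch: parse a Go-formatted gate summary such as
#   feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}
# into a Python dict of gate name -> bool. Illustrative only.
import re

def parse_gate_map(line: str) -> dict[str, bool]:
    inner = re.search(r"\{map\[(.*?)\]\}", line)
    if not inner:
        return {}
    return {
        name: value == "true"
        for name, value in (pair.split(":") for pair in inner.group(1).split())
    }

if __name__ == "__main__":
    sample = "feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}"
    print(parse_gate_map(sample))
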
Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300569 4768 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.300583 4768 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300815 4768 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300830 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300841 4768 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300855 4768 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300864 4768 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300873 4768 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300880 4768 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300888 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300896 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300906 4768 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300914 4768 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300922 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300929 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300937 4768 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300945 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300955 4768 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300967 4768 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300976 4768 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300986 4768 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.300997 4768 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301006 4768 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301017 4768 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301025 4768 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301033 4768 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301041 4768 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301049 4768 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301059 4768 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301069 4768 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301078 4768 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301087 4768 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301095 4768 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301104 4768 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301112 4768 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301121 4768 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301131 4768 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301140 4768 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301148 4768 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301156 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301182 4768 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301192 4768 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301201 4768 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301208 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301216 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301224 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301231 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 16:51:59 crc 
kubenswrapper[4768]: W1124 16:51:59.301240 4768 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301247 4768 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301255 4768 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301263 4768 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301271 4768 feature_gate.go:330] unrecognized feature gate: Example Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301280 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301288 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301296 4768 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301303 4768 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301311 4768 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301319 4768 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301327 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301335 4768 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301342 4768 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301352 4768 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301389 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301400 4768 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301409 4768 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301417 4768 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301425 4768 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301433 4768 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301441 4768 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301449 4768 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301457 4768 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301465 4768 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.301475 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 16:51:59 crc 
kubenswrapper[4768]: I1124 16:51:59.301487 4768 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.301722 4768 server.go:940] "Client rotation is on, will bootstrap in background" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.307592 4768 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.307732 4768 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.310556 4768 server.go:997] "Starting client certificate rotation" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.310610 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.312310 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-21 05:27:01.038023917 +0000 UTC Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.312529 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 636h35m1.725500673s for next certificate rotation Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.348310 4768 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.355917 4768 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.379922 4768 log.go:25] "Validated CRI v1 runtime API" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.422269 4768 log.go:25] "Validated CRI v1 image API" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.424870 4768 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.434732 4768 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-24-16-47-28-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.434780 4768 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
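
The certificate_manager lines above show the client certificate expiring 2026-02-24 but a rotation deadline of 2025-12-21, i.e. well before expiry: the client-go certificate manager jitters the rotation point to roughly 70-90% of the way through the certificate's validity and then sleeps until then, which is where the "Waiting 636h35m..." figure comes from. A sketch of that deadline computation in Go; the one-year validity window is an assumption, since only the expiry is logged:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline mirrors the jitter rule used by client-go's
// certificate manager: pick a point 70-90% of the way through the
// certificate's validity. Sketch only; the real field names differ.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	validity := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(validity) * (0.7 + 0.3*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiry taken from the log above; issue time assumed (1y validity).
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:52:08Z")
	notBefore := notAfter.AddDate(-1, 0, 0)
	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline.UTC())
	fmt.Println("waiting", time.Until(deadline).Round(time.Second), "for next certificate rotation")
}

Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.463474 4768 manager.go:217] Machine: {Timestamp:2025-11-24 16:51:59.459413435 +0000 UTC m=+0.706382164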
CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:7d12c74d-4c3d-45cf-9517-ea4f468abd63 BootID:397c5980-9223-44c8-a77d-6f192e744f3c Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:20:e7:56 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:20:e7:56 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:55:fb:21 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:3d:d6:5d Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:e0:2d:9e Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:97:ae:93 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:e2:17:8b:c4:b9:77 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:26:6b:6a:b9:1a:f1 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 
Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.463895 4768 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.464087 4768 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.466783 4768 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.467096 4768 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.467154 4768 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.467525 4768 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.467544 4768 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.468264 4768 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.468321 4768 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.469470 4768 state_mem.go:36] "Initialized new in-memory state store" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.469630 4768 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.477170 4768 kubelet.go:418] "Attempting to sync node with API server" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.477283 4768 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.477432 4768 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.477506 4768 kubelet.go:324] "Adding apiserver pod source" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.477539 4768 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.485512 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 16:51:59 crc kubenswrapper[4768]: E1124 16:51:59.485749 
4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.485502 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 16:51:59 crc kubenswrapper[4768]: E1124 16:51:59.485835 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.488032 4768 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.489557 4768 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.491541 4768 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.494521 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.494571 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.494589 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.494605 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.494635 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.494652 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.494669 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.494693 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.494711 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.494726 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.494760 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.494780 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.496099 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
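
The reflector failures above are expected this early in boot: the kubelet comes up before the static-pod kube-apiserver behind api-int.crc.testing:6443 is listening, so every list/watch gets connection refused and client-go keeps retrying with backoff until the endpoint accepts. A stand-alone analogue of that retry loop; waitForAPIServer is a hypothetical helper, not a client-go API:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer redials the endpoint with exponential backoff until
// the listener (here the kube-apiserver) comes up or maxWait elapses,
// the same shape of loop the reflectors in the log are running.
func waitForAPIServer(addr string, maxWait time.Duration) error {
	backoff := 500 * time.Millisecond
	deadline := time.Now().Add(maxWait)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up on %s: %w", addr, err)
		}
		fmt.Printf("dial %s failed (%v); retrying in %s\n", addr, err, backoff)
		time.Sleep(backoff)
		if backoff < 16*time.Second {
			backoff *= 2
		}
	}
}

func main() {
	if err := waitForAPIServer("api-int.crc.testing:6443", time.Minute); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("API server is accepting connections")
	}
}

Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.496954 4768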
server.go:1280] "Started kubelet" Nov 24 16:51:59 crc systemd[1]: Started Kubernetes Kubelet. Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.500964 4768 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.498208 4768 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.502564 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.502740 4768 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.504926 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.504988 4768 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 16:51:59 crc kubenswrapper[4768]: E1124 16:51:59.505228 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.505425 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 03:26:35.189091868 +0000 UTC Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.505626 4768 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.505686 4768 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.505809 4768 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 24 16:51:59 crc kubenswrapper[4768]: E1124 16:51:59.506177 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="200ms" Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.506846 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 16:51:59 crc kubenswrapper[4768]: E1124 16:51:59.507019 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.511471 4768 factory.go:55] Registering systemd factory Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.511506 4768 factory.go:221] Registration of the systemd container factory successfully Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.516454 4768 server.go:460] "Adding debug handlers to kubelet server" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.518745 4768 factory.go:153] Registering CRI-O factory Nov 24 16:51:59 crc 
kubenswrapper[4768]: I1124 16:51:59.518789 4768 factory.go:221] Registration of the crio container factory successfully Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.518942 4768 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.519096 4768 factory.go:103] Registering Raw factory Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.519128 4768 manager.go:1196] Started watching for new ooms in manager Nov 24 16:51:59 crc kubenswrapper[4768]: E1124 16:51:59.517322 4768 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.58:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187aff7db84b100d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-24 16:51:59.496892429 +0000 UTC m=+0.743861127,LastTimestamp:2025-11-24 16:51:59.496892429 +0000 UTC m=+0.743861127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.520858 4768 manager.go:319] Starting recovery of all containers Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.526072 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.526217 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.526302 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.526407 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.526492 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
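
The containerd factory registration above fails only because /run/containerd/containerd.sock does not exist on this CRI-O node; cAdvisor treats that as a soft failure and carries on with the CRI-O, systemd, and Raw factories it already registered. A small Go sketch of that kind of runtime-socket probe (probeRuntimeSocket is a hypothetical helper; /var/run/crio/crio.sock is CRI-O's standard socket path):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// probeRuntimeSocket reports why a runtime endpoint is unusable: the
// path may be missing entirely (the "no such file or directory" case
// in the log above), not a socket, or present but refusing dials.
func probeRuntimeSocket(path string) error {
	fi, err := os.Stat(path)
	if err != nil {
		return fmt.Errorf("stat %s: %w", path, err)
	}
	if fi.Mode()&os.ModeSocket == 0 {
		return fmt.Errorf("%s exists but is not a unix socket", path)
	}
	conn, err := net.DialTimeout("unix", path, time.Second)
	if err != nil {
		return fmt.Errorf("dial %s: %w", path, err)
	}
	conn.Close()
	return nil
}

func main() {
	for _, p := range []string{"/run/containerd/containerd.sock", "/var/run/crio/crio.sock"} {
		if err := probeRuntimeSocket(p); err != nil {
			fmt.Println("skip:", err)
			continue
		}
		fmt.Println("usable runtime socket:", p)
	}
}

Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.526570 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e"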
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.526670 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.526749 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.526833 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.526912 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.526985 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.527174 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.527258 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.527335 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.527437 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.527522 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.527616 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.527725 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.527806 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.527884 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.527961 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.528047 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.528133 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.528211 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.528291 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.528401 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.528621 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.528715 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.528797 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.529516 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.529586 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.529622 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.529648 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.529674 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.529700 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.529725 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.529752 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.529777 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.529803 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.529828 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.529853 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.529876 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.529899 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.529921 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.529948 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530001 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530025 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530060 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530084 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530117 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530142 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530165 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530403 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530433 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530460 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530486 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530514 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530540 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530563 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530588 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530611 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530634 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530660 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530685 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530707 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530733 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530755 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530778 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530805 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530829 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530850 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530877 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530897 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530919 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.530942 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531036 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531062 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531087 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531111 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531138 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531161 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531184 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531206 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531229 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531252 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531280 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531305 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531327 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531353 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531410 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531434 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531643 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531667 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531688 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531713 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531737 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531760 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531784 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531808 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531833 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531866 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531890 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531914 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531937 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.531978 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532004 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532029 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532053 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532077 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532102 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532130 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532156 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532182 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532248 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532271 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532295 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532319 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532343 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532391 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532417 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532439 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532465 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532487 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532509 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532532 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532557 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532622 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532647 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532669 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532689 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532710 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532734 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532757 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532778 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532800 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532821 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532842 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532864 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532886 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532907 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532928 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532952 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.532977 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.533000 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.533022 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.533044 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.533067 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.533090 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.533114 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.533137 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.533159 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.533181 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.533203 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.533227 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.533249 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.533271 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.533292 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.533316 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.533338 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.533396 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.533418 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.533477 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.533503 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.533526 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.533550 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.537115 4768 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.537202 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.537233 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.537253 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.537762 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.537795 4768 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.537817 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.537834 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.537853 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.537868 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.537888 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.537906 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.537921 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.537935 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.537950 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.537969 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.537983 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.537997 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538011 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538026 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538070 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538088 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538106 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538125 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538140 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538155 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538172 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538188 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538479 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538504 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538528 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538543 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538558 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538573 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538589 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538604 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538620 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538634 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538649 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538665 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538681 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538698 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538716 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538730 4768 reconstruct.go:97] "Volume reconstruction finished" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.538741 4768 reconciler.go:26] "Reconciler: start to sync state" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.551697 4768 manager.go:324] Recovery completed Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.567186 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.569639 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.569689 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.569707 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.570713 4768 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.570731 4768 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.570754 4768 state_mem.go:36] "Initialized new in-memory state store" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.576221 4768 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.579435 4768 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.579481 4768 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.579515 4768 kubelet.go:2335] "Starting kubelet main sync loop" Nov 24 16:51:59 crc kubenswrapper[4768]: E1124 16:51:59.579560 4768 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 16:51:59 crc kubenswrapper[4768]: W1124 16:51:59.580444 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 16:51:59 crc kubenswrapper[4768]: E1124 16:51:59.580500 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.592885 4768 policy_none.go:49] "None policy: Start" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.594016 4768 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.594051 4768 state_mem.go:35] "Initializing new in-memory state store" Nov 24 16:51:59 crc kubenswrapper[4768]: E1124 16:51:59.605509 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.651028 4768 manager.go:334] "Starting Device Plugin manager" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.651158 4768 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.651185 4768 server.go:79] "Starting device plugin registration server" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.652262 4768 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.652379 4768 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.652560 4768 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.652731 4768 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.652744 4768 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 16:51:59 crc kubenswrapper[4768]: E1124 16:51:59.671162 4768 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.679673 4768 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 24 16:51:59 crc kubenswrapper[4768]: 
I1124 16:51:59.679788 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.681484 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.681554 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.681575 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.681897 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.682273 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.682413 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.683200 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.683241 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.683254 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.683506 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.683750 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.683828 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.683958 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.684005 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.684024 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.684496 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.684544 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.684561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.684721 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.684785 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.684820 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.684840 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.684894 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.684944 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.685771 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.685808 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.685827 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.686009 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.686038 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.686074 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.686092 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.686337 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.686400 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.687123 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.687178 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.687194 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.687402 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.687454 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.687481 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.687547 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.687600 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.688639 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.688676 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.688694 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:51:59 crc kubenswrapper[4768]: E1124 16:51:59.707024 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="400ms" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.745415 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.745508 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.745562 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.745638 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.745687 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.745727 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.745773 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.745846 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.745917 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.745982 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.746066 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.746204 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.746282 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.746413 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.746453 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.753420 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:51:59 crc 
kubenswrapper[4768]: I1124 16:51:59.755776 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.755836 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.755856 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.755894 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 16:51:59 crc kubenswrapper[4768]: E1124 16:51:59.756779 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.58:6443: connect: connection refused" node="crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.848107 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.848388 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.848463 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.848529 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.848611 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.848664 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.848781 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.848688 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.848602 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.848556 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.848844 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.848867 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.848990 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.849248 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.849314 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.849384 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.849424 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.849489 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.849549 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.849585 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.849036 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.849640 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.849670 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.849719 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.849759 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.849769 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.849823 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.849797 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.849872 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.849998 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.957016 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.958417 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.958536 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.958614 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:51:59 crc kubenswrapper[4768]: I1124 16:51:59.958706 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 16:51:59 crc kubenswrapper[4768]: E1124 16:51:59.959330 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.58:6443: connect: connection refused" node="crc" Nov 24 16:52:00 crc kubenswrapper[4768]: I1124 16:52:00.015866 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 16:52:00 crc kubenswrapper[4768]: I1124 16:52:00.023246 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 16:52:00 crc kubenswrapper[4768]: I1124 16:52:00.042493 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 16:52:00 crc kubenswrapper[4768]: I1124 16:52:00.063297 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 16:52:00 crc kubenswrapper[4768]: I1124 16:52:00.073213 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 16:52:00 crc kubenswrapper[4768]: W1124 16:52:00.077305 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-c67f0076dd015f37cc2f81137b09ccbefa8d0c9e5135c911998471f75dece24b WatchSource:0}: Error finding container c67f0076dd015f37cc2f81137b09ccbefa8d0c9e5135c911998471f75dece24b: Status 404 returned error can't find the container with id c67f0076dd015f37cc2f81137b09ccbefa8d0c9e5135c911998471f75dece24b Nov 24 16:52:00 crc kubenswrapper[4768]: W1124 16:52:00.079251 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-3cb92dc2b73fd2bbb7fb6389254adf91d0f59ae8f9de14c022c3fed704d4ace4 WatchSource:0}: Error finding container 3cb92dc2b73fd2bbb7fb6389254adf91d0f59ae8f9de14c022c3fed704d4ace4: Status 404 returned error can't find the container with id 3cb92dc2b73fd2bbb7fb6389254adf91d0f59ae8f9de14c022c3fed704d4ace4 Nov 24 16:52:00 crc kubenswrapper[4768]: W1124 16:52:00.091098 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-534e7c03fffcaece35f750521bb3b8323ebae939d8ed0c73ccc87c9710b8bb86 WatchSource:0}: Error finding container 534e7c03fffcaece35f750521bb3b8323ebae939d8ed0c73ccc87c9710b8bb86: Status 404 returned error can't find the container with id 534e7c03fffcaece35f750521bb3b8323ebae939d8ed0c73ccc87c9710b8bb86 Nov 24 16:52:00 crc kubenswrapper[4768]: W1124 16:52:00.097429 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-966faaeb4451e11c498814f2fd7cf0b8fff7c76cd948d51a12406b09cc9cf3bd WatchSource:0}: Error finding container 966faaeb4451e11c498814f2fd7cf0b8fff7c76cd948d51a12406b09cc9cf3bd: Status 404 returned error can't find the container with id 966faaeb4451e11c498814f2fd7cf0b8fff7c76cd948d51a12406b09cc9cf3bd Nov 24 16:52:00 crc kubenswrapper[4768]: W1124 16:52:00.100083 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-1e4872577112730fb804a1fdc7852f1701695dfadb5177ffcb339dfca8f78fbf WatchSource:0}: Error finding container 1e4872577112730fb804a1fdc7852f1701695dfadb5177ffcb339dfca8f78fbf: Status 404 returned error can't find the container with id 1e4872577112730fb804a1fdc7852f1701695dfadb5177ffcb339dfca8f78fbf Nov 24 16:52:00 crc kubenswrapper[4768]: E1124 16:52:00.108166 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="800ms" Nov 24 16:52:00 crc kubenswrapper[4768]: I1124 16:52:00.360403 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:00 crc kubenswrapper[4768]: I1124 16:52:00.362256 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:00 crc kubenswrapper[4768]: I1124 16:52:00.362333 4768 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:00 crc kubenswrapper[4768]: I1124 16:52:00.362395 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:00 crc kubenswrapper[4768]: I1124 16:52:00.362457 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 16:52:00 crc kubenswrapper[4768]: E1124 16:52:00.363222 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.58:6443: connect: connection refused" node="crc" Nov 24 16:52:00 crc kubenswrapper[4768]: I1124 16:52:00.504092 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 16:52:00 crc kubenswrapper[4768]: I1124 16:52:00.506163 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 17:12:55.661618522 +0000 UTC Nov 24 16:52:00 crc kubenswrapper[4768]: W1124 16:52:00.565800 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 16:52:00 crc kubenswrapper[4768]: E1124 16:52:00.565939 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 16:52:00 crc kubenswrapper[4768]: I1124 16:52:00.585219 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1e4872577112730fb804a1fdc7852f1701695dfadb5177ffcb339dfca8f78fbf"} Nov 24 16:52:00 crc kubenswrapper[4768]: I1124 16:52:00.586397 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"966faaeb4451e11c498814f2fd7cf0b8fff7c76cd948d51a12406b09cc9cf3bd"} Nov 24 16:52:00 crc kubenswrapper[4768]: I1124 16:52:00.587850 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"534e7c03fffcaece35f750521bb3b8323ebae939d8ed0c73ccc87c9710b8bb86"} Nov 24 16:52:00 crc kubenswrapper[4768]: I1124 16:52:00.589054 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3cb92dc2b73fd2bbb7fb6389254adf91d0f59ae8f9de14c022c3fed704d4ace4"} Nov 24 16:52:00 crc kubenswrapper[4768]: I1124 16:52:00.590399 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c67f0076dd015f37cc2f81137b09ccbefa8d0c9e5135c911998471f75dece24b"} Nov 24 16:52:00 crc kubenswrapper[4768]: W1124 
16:52:00.710839 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 16:52:00 crc kubenswrapper[4768]: E1124 16:52:00.710960 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 16:52:00 crc kubenswrapper[4768]: W1124 16:52:00.906899 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 16:52:00 crc kubenswrapper[4768]: E1124 16:52:00.907596 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 16:52:00 crc kubenswrapper[4768]: E1124 16:52:00.909533 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="1.6s" Nov 24 16:52:01 crc kubenswrapper[4768]: W1124 16:52:01.016293 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 16:52:01 crc kubenswrapper[4768]: E1124 16:52:01.016459 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.163815 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.167656 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.167722 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.167740 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.167780 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 16:52:01 crc kubenswrapper[4768]: E1124 16:52:01.168552 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.58:6443: connect: connection refused" 
node="crc" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.503855 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.507242 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 15:26:40.560724174 +0000 UTC Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.507373 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 694h34m39.053354695s for next certificate rotation Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.596160 4768 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="bc31b12ace7a77709b3ff576b42a37e3e4d436562f5db7eebd81f9ae23b74ac1" exitCode=0 Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.596251 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.596283 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"bc31b12ace7a77709b3ff576b42a37e3e4d436562f5db7eebd81f9ae23b74ac1"} Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.597207 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.597242 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.597255 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.599455 4768 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657" exitCode=0 Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.599571 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.599615 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657"} Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.600523 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.600554 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.600567 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.603260 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f"} Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.603310 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e"} Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.603338 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275"} Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.605904 4768 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2" exitCode=0 Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.605999 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2"} Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.606153 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.608083 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.608138 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.608160 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.608940 4768 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515" exitCode=0 Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.608989 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515"} Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.609088 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.610388 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.610432 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.610452 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.612822 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.614796 4768 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.614822 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:01 crc kubenswrapper[4768]: I1124 16:52:01.614834 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:02 crc kubenswrapper[4768]: W1124 16:52:02.314591 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 16:52:02 crc kubenswrapper[4768]: E1124 16:52:02.314711 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.504406 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 16:52:02 crc kubenswrapper[4768]: E1124 16:52:02.510946 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.58:6443: connect: connection refused" interval="3.2s" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.618055 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d8e98467b337c1b1625211569f5df1ad40d100d3243c5358dc61c73327cf0af2"} Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.618112 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"78b0efb1b7f2aad144c24537d9304024680adc1946d26a91c03dcf4c59ac4dc8"} Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.618128 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9cefe8fbd1321d8e391d341491eff1a583f56e4ef09d1ba71da4d8c84a826185"} Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.618179 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.620238 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.620310 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.620326 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.621588 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f"} Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.621742 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.623416 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.623447 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.623458 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.626636 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc"} Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.626670 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039"} Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.626685 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4"} Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.626701 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59"} Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.628648 4768 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4" exitCode=0 Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.628718 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4"} Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.628767 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.629664 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.629699 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.629712 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.631436 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"49cc8d1811c588c8c1f29240c5ecb01aa846858f1f56f9d6ee795d43da15aff0"} Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.631509 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.632565 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.632604 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.632616 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.735746 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.768869 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.773155 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.773212 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.773228 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:02 crc kubenswrapper[4768]: I1124 16:52:02.773266 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 16:52:02 crc kubenswrapper[4768]: E1124 16:52:02.773913 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.58:6443: connect: connection refused" node="crc" Nov 24 16:52:02 crc kubenswrapper[4768]: W1124 16:52:02.930014 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 16:52:02 crc kubenswrapper[4768]: E1124 16:52:02.930114 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.503223 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.638453 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3499b2b9050db449c1854acde23142cbf3882e62c996652581f597552eafe7f3"} Nov 24 16:52:03 crc 
kubenswrapper[4768]: I1124 16:52:03.638594 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.639917 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.639968 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.639986 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.643899 4768 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac" exitCode=0 Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.644026 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.644880 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.644960 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.645258 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac"} Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.645454 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.645819 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.645913 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.645936 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.645957 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.646107 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.646147 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.646159 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.646453 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.646507 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.646527 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.647533 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.647561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:03 crc kubenswrapper[4768]: I1124 16:52:03.647571 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:03 crc kubenswrapper[4768]: W1124 16:52:03.788034 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 16:52:03 crc kubenswrapper[4768]: E1124 16:52:03.788133 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 16:52:03 crc kubenswrapper[4768]: W1124 16:52:03.896938 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.58:6443: connect: connection refused Nov 24 16:52:03 crc kubenswrapper[4768]: E1124 16:52:03.897031 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.58:6443: connect: connection refused" logger="UnhandledError" Nov 24 16:52:04 crc kubenswrapper[4768]: I1124 16:52:04.028444 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 16:52:04 crc kubenswrapper[4768]: I1124 16:52:04.653924 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 16:52:04 crc kubenswrapper[4768]: I1124 16:52:04.656196 4768 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3499b2b9050db449c1854acde23142cbf3882e62c996652581f597552eafe7f3" exitCode=255 Nov 24 16:52:04 crc kubenswrapper[4768]: I1124 16:52:04.656266 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3499b2b9050db449c1854acde23142cbf3882e62c996652581f597552eafe7f3"} Nov 24 16:52:04 crc kubenswrapper[4768]: I1124 16:52:04.656457 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:04 crc kubenswrapper[4768]: I1124 16:52:04.658599 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:04 crc kubenswrapper[4768]: I1124 16:52:04.658678 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:04 crc kubenswrapper[4768]: I1124 
16:52:04.658695 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:04 crc kubenswrapper[4768]: I1124 16:52:04.659681 4768 scope.go:117] "RemoveContainer" containerID="3499b2b9050db449c1854acde23142cbf3882e62c996652581f597552eafe7f3" Nov 24 16:52:04 crc kubenswrapper[4768]: I1124 16:52:04.661044 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae"} Nov 24 16:52:04 crc kubenswrapper[4768]: I1124 16:52:04.661102 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74"} Nov 24 16:52:04 crc kubenswrapper[4768]: I1124 16:52:04.661332 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:04 crc kubenswrapper[4768]: I1124 16:52:04.661757 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:04 crc kubenswrapper[4768]: I1124 16:52:04.662238 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:04 crc kubenswrapper[4768]: I1124 16:52:04.662302 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:04 crc kubenswrapper[4768]: I1124 16:52:04.662327 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:04 crc kubenswrapper[4768]: I1124 16:52:04.663006 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:04 crc kubenswrapper[4768]: I1124 16:52:04.663042 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:04 crc kubenswrapper[4768]: I1124 16:52:04.663054 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:05 crc kubenswrapper[4768]: I1124 16:52:05.667117 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 16:52:05 crc kubenswrapper[4768]: I1124 16:52:05.669971 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230"} Nov 24 16:52:05 crc kubenswrapper[4768]: I1124 16:52:05.670255 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 16:52:05 crc kubenswrapper[4768]: I1124 16:52:05.670328 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:05 crc kubenswrapper[4768]: I1124 16:52:05.671356 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:05 crc kubenswrapper[4768]: I1124 16:52:05.671386 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:05 crc kubenswrapper[4768]: I1124 16:52:05.671398 4768 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:05 crc kubenswrapper[4768]: I1124 16:52:05.675025 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac"} Nov 24 16:52:05 crc kubenswrapper[4768]: I1124 16:52:05.675075 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248"} Nov 24 16:52:05 crc kubenswrapper[4768]: I1124 16:52:05.675102 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9"} Nov 24 16:52:05 crc kubenswrapper[4768]: I1124 16:52:05.675196 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:05 crc kubenswrapper[4768]: I1124 16:52:05.675939 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:05 crc kubenswrapper[4768]: I1124 16:52:05.675970 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:05 crc kubenswrapper[4768]: I1124 16:52:05.675987 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:05 crc kubenswrapper[4768]: I1124 16:52:05.974955 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:05 crc kubenswrapper[4768]: I1124 16:52:05.976762 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:05 crc kubenswrapper[4768]: I1124 16:52:05.976830 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:05 crc kubenswrapper[4768]: I1124 16:52:05.976855 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:05 crc kubenswrapper[4768]: I1124 16:52:05.976900 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 16:52:06 crc kubenswrapper[4768]: I1124 16:52:06.678229 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 16:52:06 crc kubenswrapper[4768]: I1124 16:52:06.678313 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:06 crc kubenswrapper[4768]: I1124 16:52:06.678437 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:06 crc kubenswrapper[4768]: I1124 16:52:06.679816 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:06 crc kubenswrapper[4768]: I1124 16:52:06.679918 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:06 crc kubenswrapper[4768]: I1124 16:52:06.679932 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:06 crc kubenswrapper[4768]: I1124 16:52:06.679965 4768 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:06 crc kubenswrapper[4768]: I1124 16:52:06.679978 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:06 crc kubenswrapper[4768]: I1124 16:52:06.679987 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:07 crc kubenswrapper[4768]: I1124 16:52:07.111994 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 16:52:07 crc kubenswrapper[4768]: I1124 16:52:07.599602 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 16:52:07 crc kubenswrapper[4768]: I1124 16:52:07.681063 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:07 crc kubenswrapper[4768]: I1124 16:52:07.682829 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:07 crc kubenswrapper[4768]: I1124 16:52:07.682889 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:07 crc kubenswrapper[4768]: I1124 16:52:07.682908 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:08 crc kubenswrapper[4768]: I1124 16:52:08.454140 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 16:52:08 crc kubenswrapper[4768]: I1124 16:52:08.580385 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 16:52:08 crc kubenswrapper[4768]: I1124 16:52:08.580543 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:08 crc kubenswrapper[4768]: I1124 16:52:08.581470 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:08 crc kubenswrapper[4768]: I1124 16:52:08.581498 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:08 crc kubenswrapper[4768]: I1124 16:52:08.581506 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:08 crc kubenswrapper[4768]: I1124 16:52:08.683641 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:08 crc kubenswrapper[4768]: I1124 16:52:08.685188 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:08 crc kubenswrapper[4768]: I1124 16:52:08.685226 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:08 crc kubenswrapper[4768]: I1124 16:52:08.685235 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:08 crc kubenswrapper[4768]: I1124 16:52:08.884169 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 24 16:52:08 crc kubenswrapper[4768]: I1124 16:52:08.884506 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Nov 24 16:52:08 crc kubenswrapper[4768]: I1124 16:52:08.885856 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:08 crc kubenswrapper[4768]: I1124 16:52:08.885885 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:08 crc kubenswrapper[4768]: I1124 16:52:08.885893 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:09 crc kubenswrapper[4768]: I1124 16:52:09.004033 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 16:52:09 crc kubenswrapper[4768]: I1124 16:52:09.004264 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:09 crc kubenswrapper[4768]: I1124 16:52:09.005556 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:09 crc kubenswrapper[4768]: I1124 16:52:09.005579 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:09 crc kubenswrapper[4768]: I1124 16:52:09.005588 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:09 crc kubenswrapper[4768]: E1124 16:52:09.671326 4768 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 16:52:09 crc kubenswrapper[4768]: I1124 16:52:09.686375 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:09 crc kubenswrapper[4768]: I1124 16:52:09.687566 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:09 crc kubenswrapper[4768]: I1124 16:52:09.687611 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:09 crc kubenswrapper[4768]: I1124 16:52:09.687624 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:11 crc kubenswrapper[4768]: I1124 16:52:11.581469 4768 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 16:52:11 crc kubenswrapper[4768]: I1124 16:52:11.581563 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 16:52:12 crc kubenswrapper[4768]: I1124 16:52:12.580109 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 16:52:12 crc kubenswrapper[4768]: I1124 16:52:12.580391 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:12 crc kubenswrapper[4768]: I1124 16:52:12.581954 4768 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:12 crc kubenswrapper[4768]: I1124 16:52:12.581993 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:12 crc kubenswrapper[4768]: I1124 16:52:12.582006 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:12 crc kubenswrapper[4768]: I1124 16:52:12.590777 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 16:52:12 crc kubenswrapper[4768]: I1124 16:52:12.695800 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:12 crc kubenswrapper[4768]: I1124 16:52:12.696758 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:12 crc kubenswrapper[4768]: I1124 16:52:12.696842 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:12 crc kubenswrapper[4768]: I1124 16:52:12.696872 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:12 crc kubenswrapper[4768]: I1124 16:52:12.702742 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 16:52:12 crc kubenswrapper[4768]: I1124 16:52:12.788455 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 24 16:52:12 crc kubenswrapper[4768]: I1124 16:52:12.788718 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:12 crc kubenswrapper[4768]: I1124 16:52:12.790644 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:12 crc kubenswrapper[4768]: I1124 16:52:12.790795 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:12 crc kubenswrapper[4768]: I1124 16:52:12.790826 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:13 crc kubenswrapper[4768]: I1124 16:52:13.698176 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:13 crc kubenswrapper[4768]: I1124 16:52:13.699036 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:13 crc kubenswrapper[4768]: I1124 16:52:13.699090 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:13 crc kubenswrapper[4768]: I1124 16:52:13.699105 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:14 crc kubenswrapper[4768]: I1124 16:52:14.504577 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 24 16:52:14 crc kubenswrapper[4768]: I1124 16:52:14.981380 4768 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe 
status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 16:52:14 crc kubenswrapper[4768]: I1124 16:52:14.981460 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 16:52:14 crc kubenswrapper[4768]: I1124 16:52:14.987923 4768 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 16:52:14 crc kubenswrapper[4768]: I1124 16:52:14.988019 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 16:52:17 crc kubenswrapper[4768]: I1124 16:52:17.112752 4768 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 24 16:52:17 crc kubenswrapper[4768]: I1124 16:52:17.113126 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 24 16:52:18 crc kubenswrapper[4768]: I1124 16:52:18.143927 4768 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 24 16:52:18 crc kubenswrapper[4768]: I1124 16:52:18.144021 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 24 16:52:18 crc kubenswrapper[4768]: I1124 16:52:18.460474 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 16:52:18 crc kubenswrapper[4768]: I1124 16:52:18.460750 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:18 crc kubenswrapper[4768]: I1124 16:52:18.461174 4768 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection 
refused" start-of-body= Nov 24 16:52:18 crc kubenswrapper[4768]: I1124 16:52:18.461243 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 24 16:52:18 crc kubenswrapper[4768]: I1124 16:52:18.462496 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:18 crc kubenswrapper[4768]: I1124 16:52:18.462549 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:18 crc kubenswrapper[4768]: I1124 16:52:18.462563 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:18 crc kubenswrapper[4768]: I1124 16:52:18.466619 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 16:52:18 crc kubenswrapper[4768]: I1124 16:52:18.709792 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:18 crc kubenswrapper[4768]: I1124 16:52:18.710418 4768 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 24 16:52:18 crc kubenswrapper[4768]: I1124 16:52:18.710518 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 24 16:52:18 crc kubenswrapper[4768]: I1124 16:52:18.710891 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:18 crc kubenswrapper[4768]: I1124 16:52:18.710919 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:18 crc kubenswrapper[4768]: I1124 16:52:18.710930 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:19 crc kubenswrapper[4768]: E1124 16:52:19.671654 4768 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 16:52:19 crc kubenswrapper[4768]: E1124 16:52:19.976409 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 24 16:52:19 crc kubenswrapper[4768]: I1124 16:52:19.978004 4768 trace.go:236] Trace[1114111832]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 16:52:06.487) (total time: 13490ms): Nov 24 16:52:19 crc kubenswrapper[4768]: Trace[1114111832]: ---"Objects listed" error: 13490ms (16:52:19.977) Nov 24 16:52:19 crc kubenswrapper[4768]: Trace[1114111832]: [13.490928292s] [13.490928292s] END Nov 24 16:52:19 crc kubenswrapper[4768]: I1124 16:52:19.978039 4768 
reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 24 16:52:19 crc kubenswrapper[4768]: I1124 16:52:19.979621 4768 trace.go:236] Trace[945827182]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 16:52:05.892) (total time: 14087ms): Nov 24 16:52:19 crc kubenswrapper[4768]: Trace[945827182]: ---"Objects listed" error: 14087ms (16:52:19.979) Nov 24 16:52:19 crc kubenswrapper[4768]: Trace[945827182]: [14.087048894s] [14.087048894s] END Nov 24 16:52:19 crc kubenswrapper[4768]: I1124 16:52:19.979650 4768 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 24 16:52:19 crc kubenswrapper[4768]: I1124 16:52:19.980150 4768 trace.go:236] Trace[1183401484]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 16:52:09.181) (total time: 10798ms): Nov 24 16:52:19 crc kubenswrapper[4768]: Trace[1183401484]: ---"Objects listed" error: 10798ms (16:52:19.980) Nov 24 16:52:19 crc kubenswrapper[4768]: Trace[1183401484]: [10.798849996s] [10.798849996s] END Nov 24 16:52:19 crc kubenswrapper[4768]: I1124 16:52:19.980194 4768 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 24 16:52:19 crc kubenswrapper[4768]: I1124 16:52:19.981151 4768 trace.go:236] Trace[1060794719]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 16:52:08.039) (total time: 11941ms): Nov 24 16:52:19 crc kubenswrapper[4768]: Trace[1060794719]: ---"Objects listed" error: 11941ms (16:52:19.980) Nov 24 16:52:19 crc kubenswrapper[4768]: Trace[1060794719]: [11.941412802s] [11.941412802s] END Nov 24 16:52:19 crc kubenswrapper[4768]: I1124 16:52:19.981189 4768 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 24 16:52:19 crc kubenswrapper[4768]: I1124 16:52:19.981994 4768 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 24 16:52:19 crc kubenswrapper[4768]: E1124 16:52:19.983042 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.489528 4768 apiserver.go:52] "Watching apiserver" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.495281 4768 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.495562 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.495946 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.495993 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 16:52:20 crc kubenswrapper[4768]: E1124 16:52:20.496020 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.496244 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:20 crc kubenswrapper[4768]: E1124 16:52:20.496636 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.497106 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.497900 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.497921 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 16:52:20 crc kubenswrapper[4768]: E1124 16:52:20.497977 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.499802 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.499910 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.500028 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.500054 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.500110 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.501226 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.501499 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.501696 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.501784 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.506842 4768 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.537671 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.548317 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.557266 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.564932 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.572310 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.580343 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.587582 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.587617 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.587645 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.587670 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.587688 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.587707 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.587722 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 
16:52:20.587737 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.587753 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.587768 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.587783 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.587804 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.587823 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.587841 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.587862 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.587877 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.587895 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 16:52:20 crc 
kubenswrapper[4768]: I1124 16:52:20.587917 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.587937 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.587961 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.588018 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.588040 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.588087 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.588115 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.588146 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.588170 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.588186 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.588203 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.588219 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.588237 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.588256 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.588229 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.588281 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.588321 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.588369 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.588388 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.588404 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.588524 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.588679 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.588810 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.589153 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.589158 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.588423 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.589317 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.589335 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.589357 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.589374 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.589412 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 
16:52:20.589427 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.589444 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.589469 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.589487 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.589507 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.589528 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.589548 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.589567 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.589604 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590342 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 16:52:20 crc 
kubenswrapper[4768]: I1124 16:52:20.590422 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590452 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590476 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590501 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590523 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590555 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590579 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590602 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590627 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590650 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590673 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590695 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590715 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590740 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590763 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590786 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590802 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590818 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590835 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590854 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590879 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590903 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590923 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590942 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590961 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590978 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590995 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591011 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591026 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591042 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod 
\"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591058 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591075 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591092 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591108 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591162 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591180 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591198 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591219 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591245 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591265 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591282 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591298 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591314 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591330 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591347 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591373 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591414 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591432 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591449 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591464 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591480 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591498 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591516 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591533 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591549 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591639 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591658 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591673 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591690 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591707 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591722 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591739 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591759 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591782 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591804 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591855 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591878 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591900 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591923 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591942 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591957 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591973 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591992 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592008 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592029 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592051 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592073 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592096 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592120 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592144 4768 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592166 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592182 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592198 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592215 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592232 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592250 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592267 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592287 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592306 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592324 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592341 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592369 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592405 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592430 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592448 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592467 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592485 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592503 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592520 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592541 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592558 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592577 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592594 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592998 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.593032 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.593057 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.593082 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.593106 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.593142 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.596881 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.596956 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.596996 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.597035 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.597072 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.597108 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.597144 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.597179 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.597209 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.597242 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.597288 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.589334 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.589507 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.589884 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590013 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.589972 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590187 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.597593 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590330 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590328 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590486 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590628 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590883 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.597713 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591121 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591156 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591469 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591651 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.591751 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.592295 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.593283 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.593748 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.593960 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.594311 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.594444 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.594586 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.598024 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.598145 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.594780 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). 
InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.598293 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.594840 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.594878 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.595070 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.595202 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.595396 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.595456 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.595663 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.595702 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.595816 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.596010 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.596129 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.596167 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.596223 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.596368 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.597336 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.597403 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.590969 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.598646 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.598943 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.599051 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.599331 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.599448 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.599486 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.599609 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.599947 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.599975 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.600063 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.600186 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.600194 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.600521 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.600658 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.600661 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.601158 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.601174 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.601260 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.601287 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.601630 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.602038 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.602053 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.602048 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.602248 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.602314 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.602435 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: E1124 16:52:20.602467 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:52:21.102438611 +0000 UTC m=+22.349407279 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.598293 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.602718 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.602720 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.602735 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.602756 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.602798 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.602835 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.602862 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.602893 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.602924 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.602955 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.602985 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603015 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603046 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603072 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603101 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603133 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603160 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603189 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603212 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603211 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603226 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603298 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603328 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603362 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603397 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603455 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603501 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603522 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603563 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603620 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603665 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603789 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603826 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603851 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603849 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603899 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603946 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.603992 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.604036 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.604074 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.604097 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.604117 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.604697 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.604755 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.604790 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.604826 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.604859 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.604891 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.604155 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.604602 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.605025 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.605187 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.605203 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.605225 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.605089 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.605314 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: E1124 16:52:20.605394 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.605519 4768 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.605629 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.605718 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.605867 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.605886 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.606038 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.606039 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.606424 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.606659 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.606694 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.606704 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.606954 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.607250 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.607600 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.607700 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.608036 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.608230 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.608696 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.608722 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.608743 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.608845 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.608866 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.608901 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.608983 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.609020 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: E1124 16:52:20.609266 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:21.10923696 +0000 UTC m=+22.356205618 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.609279 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.609309 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.609336 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.609525 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.609535 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.609658 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.609680 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.609693 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.609737 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.609846 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.609918 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.609981 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.610069 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.610116 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.610183 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.610268 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.611692 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: E1124 16:52:20.611720 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.610489 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.610631 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.611142 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.611361 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.611607 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.611147 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.611927 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.611933 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.611956 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.611999 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.612012 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.612077 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: E1124 16:52:20.612116 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:21.112098405 +0000 UTC m=+22.359067063 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.612143 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.612196 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.612678 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.613149 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.613502 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.612707 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.613599 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.613677 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.614063 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.614077 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.614111 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.613506 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.614187 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.614190 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.613495 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.614222 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.614495 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.614509 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.614018 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.614025 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.614535 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.614579 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.614681 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.614683 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.614729 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.614739 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.614754 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.616091 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.616178 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.616214 4768 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.616268 4768 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.615041 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.614763 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.615094 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.614849 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.615192 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.615504 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.615629 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.615816 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.615996 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.616009 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.616323 4768 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.616668 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.616850 4768 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.616884 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.616900 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.616915 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.616943 4768 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.616955 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.616966 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.616976 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.616989 4768 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617001 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617010 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617021 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617042 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") 
on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617051 4768 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617060 4768 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617069 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617080 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617089 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617099 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617111 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617120 4768 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617128 4768 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617137 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617148 4768 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617158 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617167 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath 
\"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617177 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617189 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617199 4768 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617208 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617219 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617228 4768 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617238 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617246 4768 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617259 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617268 4768 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617277 4768 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617286 4768 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617297 4768 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" 
Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617316 4768 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617325 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617334 4768 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617348 4768 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617356 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617365 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617392 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617402 4768 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617411 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617419 4768 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617430 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617438 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.617446 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.620915 4768 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.620932 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.621095 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.621201 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.621287 4768 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.621373 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.622317 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.622452 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.622375 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.622634 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.622655 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.622675 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.622693 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.622715 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.622741 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.622786 4768 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.622804 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.622823 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.622841 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.622859 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.622876 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 24 
16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.622895 4768 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.622912 4768 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.622931 4768 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.622951 4768 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.622968 4768 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.622987 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.623006 4768 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.623024 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.623045 4768 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.623062 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.623082 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.623099 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.623116 4768 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on 
node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.623133 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.623150 4768 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.623168 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.623196 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.629734 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.630080 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: E1124 16:52:20.631319 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 16:52:20 crc kubenswrapper[4768]: E1124 16:52:20.631342 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 16:52:20 crc kubenswrapper[4768]: E1124 16:52:20.631356 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:20 crc kubenswrapper[4768]: E1124 16:52:20.631429 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:21.131410654 +0000 UTC m=+22.378379312 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.632928 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.633452 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: E1124 16:52:20.633478 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 16:52:20 crc kubenswrapper[4768]: E1124 16:52:20.633650 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 16:52:20 crc kubenswrapper[4768]: E1124 16:52:20.633710 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:20 crc kubenswrapper[4768]: E1124 16:52:20.633815 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:21.133797307 +0000 UTC m=+22.380765965 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.637994 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.640225 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.642873 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.647720 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.652503 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724463 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724502 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724563 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724574 4768 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724583 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724591 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724597 4768 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724639 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724651 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724662 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724670 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724678 4768 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724687 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724696 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724705 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724713 4768 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724722 4768 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724731 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724740 4768 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724749 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724757 4768 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724765 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724774 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724782 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724791 4768 
reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724800 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724791 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724810 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724910 4768 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724929 4768 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724950 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724967 4768 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.724984 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725001 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725019 4768 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725037 4768 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725055 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725071 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725089 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725108 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725124 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725137 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725181 4768 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725211 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725225 4768 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725237 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725251 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725262 4768 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725273 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725285 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725296 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725308 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725322 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725333 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725352 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725365 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725393 4768 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725406 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725420 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725432 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725445 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725458 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725472 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725486 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725499 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725511 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725524 4768 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725536 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725548 4768 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725560 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725573 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725586 4768 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725600 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725613 4768 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725626 4768 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725639 4768 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725650 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725663 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725675 4768 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725686 4768 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725698 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725709 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725721 4768 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725765 4768 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725780 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725791 4768 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725804 4768 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725816 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725826 4768 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725840 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725851 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725862 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725875 4768 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725886 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725899 4768 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725908 4768 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725919 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725929 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725941 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725951 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725962 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725972 4768 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725982 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.725991 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.726002 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.726014 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.726025 4768 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.726036 4768 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.726048 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.812425 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.817905 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 16:52:20 crc kubenswrapper[4768]: W1124 16:52:20.824023 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-f080e4d114d18b2c5c29ca54f152e2002b37712a94fb3addbea9a80f6521ade9 WatchSource:0}: Error finding container f080e4d114d18b2c5c29ca54f152e2002b37712a94fb3addbea9a80f6521ade9: Status 404 returned error can't find the container with id f080e4d114d18b2c5c29ca54f152e2002b37712a94fb3addbea9a80f6521ade9 Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.824301 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 16:52:20 crc kubenswrapper[4768]: W1124 16:52:20.829376 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-c8736e306c7207a7fd48b7a9e48327de8cbc290949fdc181dcc4f9a6dfceab37 WatchSource:0}: Error finding container c8736e306c7207a7fd48b7a9e48327de8cbc290949fdc181dcc4f9a6dfceab37: Status 404 returned error can't find the container with id c8736e306c7207a7fd48b7a9e48327de8cbc290949fdc181dcc4f9a6dfceab37 Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.893650 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.898574 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.909374 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.919673 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.929323 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.934598 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.943681 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.956438 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.971247 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.981604 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:20 crc kubenswrapper[4768]: I1124 16:52:20.991680 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.003328 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.020777 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.039527 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.052474 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.062953 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.129598 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.129667 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.129709 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:21 crc kubenswrapper[4768]: E1124 16:52:21.129778 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:52:22.129750029 +0000 UTC m=+23.376718687 (durationBeforeRetry 1s). 
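
Every "Failed to update status for pod" entry above fails the same way: the API server cannot deliver the admission review to the pod.network-node-identity.openshift.io webhook because nothing is listening on 127.0.0.1:9743 yet; the webhook pod itself is still coming up, as the ContainerStarted events for network-node-identity-vrzqb further below show. A minimal Go sketch of the same probe, assuming it runs on the node itself; skipping certificate verification is an assumption made for the connectivity check only:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "strings"
        "time"
    )

    func main() {
        // Reproduce the call the API server keeps failing: POST /pod on the
        // node-local webhook. "connect: connection refused" simply means the
        // webhook container is not yet listening on 127.0.0.1:9743.
        client := &http.Client{
            Timeout: 10 * time.Second, // mirrors ?timeout=10s in the logged URL
            Transport: &http.Transport{
                // Connectivity probe only: the webhook serves a cluster-internal
                // certificate, so verification is skipped here (assumption).
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Post("https://127.0.0.1:9743/pod", "application/json", strings.NewReader("{}"))
        if err != nil {
            fmt.Println("webhook unreachable:", err) // expected while the pod is starting
            return
        }
        defer resp.Body.Close()
        fmt.Println("webhook reachable:", resp.Status)
    }

Once the webhook container is running, the same probe returns an HTTP status instead of a dial error, and the queued status patches go through.
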
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:52:21 crc kubenswrapper[4768]: E1124 16:52:21.129797 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 16:52:21 crc kubenswrapper[4768]: E1124 16:52:21.129859 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:22.129844632 +0000 UTC m=+23.376813290 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 16:52:21 crc kubenswrapper[4768]: E1124 16:52:21.129859 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 16:52:21 crc kubenswrapper[4768]: E1124 16:52:21.129944 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:22.129927464 +0000 UTC m=+23.376896122 (durationBeforeRetry 1s). 
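
The UnmountVolume.TearDown failure above is a registration race rather than a storage fault: the kubelet has just restarted, and the hostpath CSI driver has not yet re-registered over its plugin socket, so the lookup by driver name fails. A sketch of that lookup under simplified, assumed types; the real registry lives in the kubelet's CSI plugin manager:

    package main

    import (
        "errors"
        "fmt"
    )

    // csiRegistry is an illustrative stand-in for the kubelet's list of
    // registered CSI drivers; the real one is populated as each driver
    // registers over its plugin socket.
    type csiRegistry struct {
        drivers map[string]string // driver name -> plugin socket path
    }

    func (r *csiRegistry) clientFor(name string) (string, error) {
        sock, ok := r.drivers[name]
        if !ok {
            // Same failure mode as Unmounter.TearDownAt in the log above.
            return "", errors.New("driver name " + name + " not found in the list of registered CSI drivers")
        }
        return sock, nil
    }

    func main() {
        reg := &csiRegistry{drivers: map[string]string{}} // empty right after a kubelet restart
        if _, err := reg.clientFor("kubevirt.io.hostpath-provisioner"); err != nil {
            fmt.Println("unmount blocked:", err)
        }
        // Once the driver's node plugin re-registers (this socket path is
        // hypothetical), the 1s retry from the log can succeed.
        reg.drivers["kubevirt.io.hostpath-provisioner"] = "/var/lib/kubelet/plugins/csi-hostpath/csi.sock"
        if sock, err := reg.clientFor("kubevirt.io.hostpath-provisioner"); err == nil {
            fmt.Println("driver registered at", sock)
        }
    }
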
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.230710 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.230759 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:21 crc kubenswrapper[4768]: E1124 16:52:21.230879 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 16:52:21 crc kubenswrapper[4768]: E1124 16:52:21.230896 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 16:52:21 crc kubenswrapper[4768]: E1124 16:52:21.230907 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:21 crc kubenswrapper[4768]: E1124 16:52:21.230955 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:22.230941926 +0000 UTC m=+23.477910584 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:21 crc kubenswrapper[4768]: E1124 16:52:21.230957 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 16:52:21 crc kubenswrapper[4768]: E1124 16:52:21.231002 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 16:52:21 crc kubenswrapper[4768]: E1124 16:52:21.231017 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:21 crc kubenswrapper[4768]: E1124 16:52:21.231094 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:22.23107082 +0000 UTC m=+23.478039488 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.584242 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.584789 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.585610 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.586285 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.586846 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.587363 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 
24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.588092 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.588712 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.589352 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.589979 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.592599 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.593285 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.594111 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.594708 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.595209 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.596045 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.596686 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.597472 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.598040 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.598803 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 
16:52:21.599396 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.599971 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.600438 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.601051 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.601578 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.602168 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.602818 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.603328 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.604000 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.604652 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.605087 4768 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.605192 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.608830 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.609281 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 
16:52:21.610203 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.612922 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.614078 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.614790 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.615576 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.617687 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.618391 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.619671 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.620881 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.621649 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.622729 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.623431 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.624706 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.625641 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.627110 4768 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.627755 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.628309 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.629689 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.630414 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.632134 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.725506 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd"} Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.725577 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9"} Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.725592 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c8736e306c7207a7fd48b7a9e48327de8cbc290949fdc181dcc4f9a6dfceab37"} Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.727799 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa"} Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.727823 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f080e4d114d18b2c5c29ca54f152e2002b37712a94fb3addbea9a80f6521ade9"} Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.729528 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.729993 4768 log.go:25] "Finished parsing log file" 
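
The "Cleaned up orphaned pod volumes dir" burst above is the kubelet's post-restart sweep of /var/lib/kubelet/pods: any per-UID directory no longer backed by a tracked pod has its volumes directory removed. A simplified sketch of such a sweep; the function name, the active-UID map, and the direct os.RemoveAll are assumptions, not the kubelet's actual code path, which only removes a directory once everything under it is unmounted:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cleanupOrphans walks the pods directory and removes volume dirs whose
    // pod UID is no longer active (illustrative stand-in for kubelet logic).
    func cleanupOrphans(podsDir string, active map[string]bool) error {
        entries, err := os.ReadDir(podsDir)
        if err != nil {
            return err
        }
        for _, e := range entries {
            if !e.IsDir() || active[e.Name()] {
                continue // live pod: leave its volumes alone
            }
            volumes := filepath.Join(podsDir, e.Name(), "volumes")
            if _, err := os.Stat(volumes); err != nil {
                continue // already gone, nothing to report
            }
            fmt.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q\n", e.Name(), volumes)
            if err := os.RemoveAll(volumes); err != nil { // illustrative; see note above
                return err
            }
        }
        return nil
    }

    func main() {
        // Hypothetical invocation; the UID below is taken from the log purely
        // as an example of the directory naming.
        _ = cleanupOrphans("/var/lib/kubelet/pods", map[string]bool{
            "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8": true, // networking-console-plugin pod, still present
        })
    }
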
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.731203 4768 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230" exitCode=255 Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.731251 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230"} Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.731301 4768 scope.go:117] "RemoveContainer" containerID="3499b2b9050db449c1854acde23142cbf3882e62c996652581f597552eafe7f3" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.732999 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5f9b0fe7e01143737a653bff407d40de65539ab221aef28af4d26b46c7cf949d"} Nov 24 16:52:21 crc kubenswrapper[4768]: E1124 16:52:21.737740 4768 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.739468 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.748780 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.757831 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.767622 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.776973 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.785616 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.796075 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.807966 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.819740 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.829550 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.841159 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.850078 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.858794 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.867033 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.910670 4768 scope.go:117] "RemoveContainer" containerID="906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230"
Nov 24 16:52:21 crc kubenswrapper[4768]: E1124 16:52:21.910858 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Nov 24 16:52:21 crc kubenswrapper[4768]: I1124 16:52:21.911011 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.007011 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-wlblb"]
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.007422 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wlblb"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.011444 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.012865 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.016117 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.024124 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.035788 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.038175 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0ae5decc-7de7-41db-9adf-b5551322c43a-hosts-file\") pod \"node-resolver-wlblb\" (UID: \"0ae5decc-7de7-41db-9adf-b5551322c43a\") " pod="openshift-dns/node-resolver-wlblb" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.038220 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgwp2\" (UniqueName: \"kubernetes.io/projected/0ae5decc-7de7-41db-9adf-b5551322c43a-kube-api-access-mgwp2\") pod \"node-resolver-wlblb\" (UID: \"0ae5decc-7de7-41db-9adf-b5551322c43a\") " pod="openshift-dns/node-resolver-wlblb" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.045907 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.063285 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.071709 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.089554 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3499b2b9050db449c1854acde23142cbf3882e62c996652581f597552eafe7f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:04Z\\\",\\\"message\\\":\\\"W1124 16:52:03.109334 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
16:52:03.110045 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764003123 cert, and key in /tmp/serving-cert-1292433647/serving-signer.crt, /tmp/serving-cert-1292433647/serving-signer.key\\\\nI1124 16:52:03.414695 1 observer_polling.go:159] Starting file observer\\\\nW1124 16:52:03.427338 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 16:52:03.427677 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:03.428533 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1292433647/tls.crt::/tmp/serving-cert-1292433647/tls.key\\\\\\\"\\\\nF1124 16:52:03.961909 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.099671 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.109007 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.119236 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.139597 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.139686 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgwp2\" (UniqueName: \"kubernetes.io/projected/0ae5decc-7de7-41db-9adf-b5551322c43a-kube-api-access-mgwp2\") pod \"node-resolver-wlblb\" (UID: \"0ae5decc-7de7-41db-9adf-b5551322c43a\") " pod="openshift-dns/node-resolver-wlblb"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.139737 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 16:52:22 crc kubenswrapper[4768]: E1124 16:52:22.139769 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:52:24.139741879 +0000 UTC m=+25.386710537 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 16:52:22 crc kubenswrapper[4768]: E1124 16:52:22.139811 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.139845 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 16:52:22 crc kubenswrapper[4768]: E1124 16:52:22.139862 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:24.139847282 +0000 UTC m=+25.386815940 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.139889 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0ae5decc-7de7-41db-9adf-b5551322c43a-hosts-file\") pod \"node-resolver-wlblb\" (UID: \"0ae5decc-7de7-41db-9adf-b5551322c43a\") " pod="openshift-dns/node-resolver-wlblb"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.139949 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0ae5decc-7de7-41db-9adf-b5551322c43a-hosts-file\") pod \"node-resolver-wlblb\" (UID: \"0ae5decc-7de7-41db-9adf-b5551322c43a\") " pod="openshift-dns/node-resolver-wlblb"
Nov 24 16:52:22 crc kubenswrapper[4768]: E1124 16:52:22.139958 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 24 16:52:22 crc kubenswrapper[4768]: E1124 16:52:22.140005 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:24.139998156 +0000 UTC m=+25.386966814 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.160207 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgwp2\" (UniqueName: \"kubernetes.io/projected/0ae5decc-7de7-41db-9adf-b5551322c43a-kube-api-access-mgwp2\") pod \"node-resolver-wlblb\" (UID: \"0ae5decc-7de7-41db-9adf-b5551322c43a\") " pod="openshift-dns/node-resolver-wlblb"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.240545 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.240583 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 16:52:22 crc kubenswrapper[4768]: E1124 16:52:22.240697 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 24 16:52:22 crc kubenswrapper[4768]: E1124 16:52:22.240718 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 24 16:52:22 crc kubenswrapper[4768]: E1124 16:52:22.240726 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 24 16:52:22 crc kubenswrapper[4768]: E1124 16:52:22.240730 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 16:52:22 crc kubenswrapper[4768]: E1124 16:52:22.240742 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 24 16:52:22 crc kubenswrapper[4768]: E1124 16:52:22.240750 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 16:52:22 crc kubenswrapper[4768]: E1124 16:52:22.240806 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:24.240772472 +0000 UTC m=+25.487741130 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 16:52:22 crc kubenswrapper[4768]: E1124 16:52:22.240833 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:24.240828043 +0000 UTC m=+25.487796701 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.318726 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wlblb"
Nov 24 16:52:22 crc kubenswrapper[4768]: W1124 16:52:22.329158 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ae5decc_7de7_41db_9adf_b5551322c43a.slice/crio-3a156e4719a257aeb576f65e3c90dfbe968f02a4a70e3cf2c18006b586c168b3 WatchSource:0}: Error finding container 3a156e4719a257aeb576f65e3c90dfbe968f02a4a70e3cf2c18006b586c168b3: Status 404 returned error can't find the container with id 3a156e4719a257aeb576f65e3c90dfbe968f02a4a70e3cf2c18006b586c168b3
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.406397 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-jf255"]
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.406795 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jf255"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.411497 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-k8vfj"]
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.411917 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-k8vfj"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.413060 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.413328 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.413435 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.413894 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.413996 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.414075 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.417171 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.417192 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.418405 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.418469 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.418667 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-c6hmx"]
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.419281 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-98lk9"]
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.419520 
4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.420706 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.422338 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.424064 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.427138 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.427643 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.428248 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.428518 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.428746 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.429641 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.429667 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.443763 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006-cni-binary-copy\") pod \"multus-additional-cni-plugins-c6hmx\" (UID: \"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\") " pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.443805 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-node-log\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.443826 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-cni-binary-copy\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.443842 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-etc-kubernetes\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " 
pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.443863 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-slash\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.443878 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-cni-netd\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.443894 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-ovn-node-metrics-cert\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.443913 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-run-ovn\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.443928 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-host-var-lib-kubelet\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.443957 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-var-lib-openvswitch\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.443973 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-system-cni-dir\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.443989 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-multus-cni-dir\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444007 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c6hmx\" (UID: 
\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\") " pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444033 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/517d8128-bef5-40a3-a786-5010780c2a58-mcd-auth-proxy-config\") pod \"machine-config-daemon-jf255\" (UID: \"517d8128-bef5-40a3-a786-5010780c2a58\") " pod="openshift-machine-config-operator/machine-config-daemon-jf255" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444053 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-host-var-lib-cni-multus\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444068 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqgl7\" (UniqueName: \"kubernetes.io/projected/71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006-kube-api-access-pqgl7\") pod \"multus-additional-cni-plugins-c6hmx\" (UID: \"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\") " pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444084 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-multus-conf-dir\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444099 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-run-netns\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444114 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-etc-openvswitch\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444131 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdzd7\" (UniqueName: \"kubernetes.io/projected/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-kube-api-access-fdzd7\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444148 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-host-run-netns\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444163 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-f4twm\" (UniqueName: \"kubernetes.io/projected/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-kube-api-access-f4twm\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444185 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-run-openvswitch\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444200 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/517d8128-bef5-40a3-a786-5010780c2a58-rootfs\") pod \"machine-config-daemon-jf255\" (UID: \"517d8128-bef5-40a3-a786-5010780c2a58\") " pod="openshift-machine-config-operator/machine-config-daemon-jf255" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444215 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-os-release\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444232 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-ovnkube-config\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444247 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-env-overrides\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444264 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-cnibin\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444280 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-multus-socket-dir-parent\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444303 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-host-run-multus-certs\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444356 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006-cnibin\") pod \"multus-additional-cni-plugins-c6hmx\" (UID: \"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\") " pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444385 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-kubelet\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444403 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-run-systemd\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444420 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444440 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts5s5\" (UniqueName: \"kubernetes.io/projected/517d8128-bef5-40a3-a786-5010780c2a58-kube-api-access-ts5s5\") pod \"machine-config-daemon-jf255\" (UID: \"517d8128-bef5-40a3-a786-5010780c2a58\") " pod="openshift-machine-config-operator/machine-config-daemon-jf255" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444592 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-systemd-units\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444624 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-run-ovn-kubernetes\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444655 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-ovnkube-script-lib\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444672 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-hostroot\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " 
pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444738 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006-os-release\") pod \"multus-additional-cni-plugins-c6hmx\" (UID: \"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\") " pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444760 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-c6hmx\" (UID: \"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\") " pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444779 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-host-run-k8s-cni-cncf-io\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444794 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-host-var-lib-cni-bin\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444811 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-multus-daemon-config\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444827 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/517d8128-bef5-40a3-a786-5010780c2a58-proxy-tls\") pod \"machine-config-daemon-jf255\" (UID: \"517d8128-bef5-40a3-a786-5010780c2a58\") " pod="openshift-machine-config-operator/machine-config-daemon-jf255" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.444874 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-log-socket\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.445009 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-cni-bin\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.445032 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006-system-cni-dir\") pod \"multus-additional-cni-plugins-c6hmx\" (UID: \"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\") " pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.451452 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.477428 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.493080 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.512836 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.541087 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547016 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/517d8128-bef5-40a3-a786-5010780c2a58-proxy-tls\") pod \"machine-config-daemon-jf255\" (UID: \"517d8128-bef5-40a3-a786-5010780c2a58\") " pod="openshift-machine-config-operator/machine-config-daemon-jf255" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547048 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-log-socket\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547064 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-cni-bin\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547081 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006-system-cni-dir\") pod \"multus-additional-cni-plugins-c6hmx\" (UID: \"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\") " pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547096 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-etc-kubernetes\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547113 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006-cni-binary-copy\") pod \"multus-additional-cni-plugins-c6hmx\" (UID: \"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\") " pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547128 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-node-log\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547145 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-cni-binary-copy\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547164 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-slash\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547180 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-cni-netd\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547195 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-ovn-node-metrics-cert\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547212 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-run-ovn\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547225 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-host-var-lib-kubelet\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547225 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-node-log\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547268 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-etc-kubernetes\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547231 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006-system-cni-dir\") pod 
\"multus-additional-cni-plugins-c6hmx\" (UID: \"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\") " pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547246 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-var-lib-openvswitch\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547366 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-system-cni-dir\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547389 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-multus-cni-dir\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547409 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c6hmx\" (UID: \"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\") " pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547429 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/517d8128-bef5-40a3-a786-5010780c2a58-mcd-auth-proxy-config\") pod \"machine-config-daemon-jf255\" (UID: \"517d8128-bef5-40a3-a786-5010780c2a58\") " pod="openshift-machine-config-operator/machine-config-daemon-jf255" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547447 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-host-var-lib-cni-multus\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547465 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqgl7\" (UniqueName: \"kubernetes.io/projected/71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006-kube-api-access-pqgl7\") pod \"multus-additional-cni-plugins-c6hmx\" (UID: \"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\") " pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547483 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-multus-conf-dir\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547528 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-run-netns\") pod 
\"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547547 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-etc-openvswitch\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547566 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdzd7\" (UniqueName: \"kubernetes.io/projected/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-kube-api-access-fdzd7\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547584 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-host-run-netns\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547600 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4twm\" (UniqueName: \"kubernetes.io/projected/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-kube-api-access-f4twm\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547632 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-run-openvswitch\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547651 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/517d8128-bef5-40a3-a786-5010780c2a58-rootfs\") pod \"machine-config-daemon-jf255\" (UID: \"517d8128-bef5-40a3-a786-5010780c2a58\") " pod="openshift-machine-config-operator/machine-config-daemon-jf255" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547670 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-os-release\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547689 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-ovnkube-config\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547707 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-env-overrides\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" 
Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547724 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-cnibin\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547741 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-multus-socket-dir-parent\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547761 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-host-run-multus-certs\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547782 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006-cnibin\") pod \"multus-additional-cni-plugins-c6hmx\" (UID: \"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\") " pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547798 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-kubelet\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547816 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-run-systemd\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547832 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547860 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts5s5\" (UniqueName: \"kubernetes.io/projected/517d8128-bef5-40a3-a786-5010780c2a58-kube-api-access-ts5s5\") pod \"machine-config-daemon-jf255\" (UID: \"517d8128-bef5-40a3-a786-5010780c2a58\") " pod="openshift-machine-config-operator/machine-config-daemon-jf255" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547875 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-ovnkube-script-lib\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc 
kubenswrapper[4768]: I1124 16:52:22.547890 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-hostroot\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547915 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-systemd-units\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547930 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-run-ovn-kubernetes\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547945 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-host-var-lib-cni-bin\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547961 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-multus-daemon-config\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547982 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006-os-release\") pod \"multus-additional-cni-plugins-c6hmx\" (UID: \"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\") " pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547230 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-cni-bin\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547999 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-c6hmx\" (UID: \"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\") " pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.548017 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-host-run-k8s-cni-cncf-io\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.548046 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-slash\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.548072 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-host-run-k8s-cni-cncf-io\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.548338 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-cni-binary-copy\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.548401 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-run-ovn\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.548444 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006-cni-binary-copy\") pod \"multus-additional-cni-plugins-c6hmx\" (UID: \"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\") " pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.548494 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006-cnibin\") pod \"multus-additional-cni-plugins-c6hmx\" (UID: \"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\") " pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.548531 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-cnibin\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.548601 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-env-overrides\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.548639 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-kubelet\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.548651 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-ovnkube-config\") pod \"ovnkube-node-98lk9\" (UID: 
\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.548665 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-run-systemd\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.548690 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-host-run-multus-certs\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547274 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-cni-netd\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.548717 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-systemd-units\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.548743 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-system-cni-dir\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.548744 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-run-ovn-kubernetes\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547299 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-var-lib-openvswitch\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.548897 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-multus-cni-dir\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.548652 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-multus-socket-dir-parent\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.549068 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-ovnkube-script-lib\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.549191 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-multus-daemon-config\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.549235 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-run-netns\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.549254 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-host-var-lib-cni-multus\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.549287 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-c6hmx\" (UID: \"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\") " pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.549325 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-host-var-lib-kubelet\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.549362 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-run-openvswitch\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.549401 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/517d8128-bef5-40a3-a786-5010780c2a58-rootfs\") pod \"machine-config-daemon-jf255\" (UID: \"517d8128-bef5-40a3-a786-5010780c2a58\") " pod="openshift-machine-config-operator/machine-config-daemon-jf255" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.549531 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-os-release\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.549541 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-multus-conf-dir\") 
pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.549578 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-host-run-netns\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.549604 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-host-var-lib-cni-bin\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.549619 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.549658 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006-os-release\") pod \"multus-additional-cni-plugins-c6hmx\" (UID: \"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\") " pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.549678 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-etc-openvswitch\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.547160 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-log-socket\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.549710 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/517d8128-bef5-40a3-a786-5010780c2a58-mcd-auth-proxy-config\") pod \"machine-config-daemon-jf255\" (UID: \"517d8128-bef5-40a3-a786-5010780c2a58\") " pod="openshift-machine-config-operator/machine-config-daemon-jf255" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.549705 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-hostroot\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.552062 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/517d8128-bef5-40a3-a786-5010780c2a58-proxy-tls\") pod \"machine-config-daemon-jf255\" (UID: \"517d8128-bef5-40a3-a786-5010780c2a58\") " pod="openshift-machine-config-operator/machine-config-daemon-jf255" Nov 
24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.552616 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c6hmx\" (UID: \"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\") " pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.553318 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-ovn-node-metrics-cert\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.570751 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdzd7\" (UniqueName: \"kubernetes.io/projected/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-kube-api-access-fdzd7\") pod \"ovnkube-node-98lk9\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.572006 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqgl7\" (UniqueName: \"kubernetes.io/projected/71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006-kube-api-access-pqgl7\") pod \"multus-additional-cni-plugins-c6hmx\" (UID: \"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\") " pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.572645 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts5s5\" (UniqueName: \"kubernetes.io/projected/517d8128-bef5-40a3-a786-5010780c2a58-kube-api-access-ts5s5\") pod \"machine-config-daemon-jf255\" (UID: \"517d8128-bef5-40a3-a786-5010780c2a58\") " pod="openshift-machine-config-operator/machine-config-daemon-jf255" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.575663 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.580083 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4twm\" (UniqueName: \"kubernetes.io/projected/b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a-kube-api-access-f4twm\") pod \"multus-k8vfj\" (UID: \"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\") " pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.580286 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:22 crc kubenswrapper[4768]: E1124 16:52:22.580451 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.580816 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:22 crc kubenswrapper[4768]: E1124 16:52:22.580868 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.580961 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:22 crc kubenswrapper[4768]: E1124 16:52:22.581121 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.594076 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3499b2b9050db449c1854acde23142cbf3882e62c996652581f597552eafe7f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:04Z\\\",\\\"message\\\":\\\"W1124 16:52:03.109334 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 
16:52:03.110045 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764003123 cert, and key in /tmp/serving-cert-1292433647/serving-signer.crt, /tmp/serving-cert-1292433647/serving-signer.key\\\\nI1124 16:52:03.414695 1 observer_polling.go:159] Starting file observer\\\\nW1124 16:52:03.427338 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 16:52:03.427677 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:03.428533 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1292433647/tls.crt::/tmp/serving-cert-1292433647/tls.key\\\\\\\"\\\\nF1124 16:52:03.961909 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.613405 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.627151 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.637637 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.653531 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.666904 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.688236 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host
-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.705607 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.719660 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.722931 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jf255" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.736791 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.741600 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-k8vfj" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.748876 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" event={"ID":"517d8128-bef5-40a3-a786-5010780c2a58","Type":"ContainerStarted","Data":"c883545c82ec98bc87fc616a59cc04b71c8ee90e63fb7f751a51cc26001e7ee1"} Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.754075 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.759384 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.762431 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wlblb" event={"ID":"0ae5decc-7de7-41db-9adf-b5551322c43a","Type":"ContainerStarted","Data":"158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d"} Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.762485 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wlblb" event={"ID":"0ae5decc-7de7-41db-9adf-b5551322c43a","Type":"ContainerStarted","Data":"3a156e4719a257aeb576f65e3c90dfbe968f02a4a70e3cf2c18006b586c168b3"} Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.764499 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.768253 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.769760 4768 scope.go:117] "RemoveContainer" containerID="906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230" Nov 24 16:52:22 crc kubenswrapper[4768]: E1124 16:52:22.769939 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.776606 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.795873 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3499b2b9050db449c1854acde23142cbf3882e62c996652581f597552eafe7f3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:04Z\\\",\\\"message\\\":\\\"W1124 16:52:03.109334 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 16:52:03.110045 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764003123 cert, and key in /tmp/serving-cert-1292433647/serving-signer.crt, /tmp/serving-cert-1292433647/serving-signer.key\\\\nI1124 16:52:03.414695 1 observer_polling.go:159] Starting file observer\\\\nW1124 16:52:03.427338 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 16:52:03.427677 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:03.428533 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1292433647/tls.crt::/tmp/serving-cert-1292433647/tls.key\\\\\\\"\\\\nF1124 16:52:03.961909 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for 
mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" 
Nov 24 16:52:22 crc kubenswrapper[4768]: W1124 16:52:22.797605 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71c5817b_8ca1_4d97_8a2f_0ffc8e9a1006.slice/crio-d1ad0dd9d63bf689a4ea93a7c0c9b830d73f43fd491f0f3e957df718b95bb993 WatchSource:0}: Error finding container d1ad0dd9d63bf689a4ea93a7c0c9b830d73f43fd491f0f3e957df718b95bb993: Status 404 returned error can't find the container with id d1ad0dd9d63bf689a4ea93a7c0c9b830d73f43fd491f0f3e957df718b95bb993 Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.814103 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.819925 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.830623 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.840511 4768 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-etcd/etcd-crc"] Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.848705 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.849646 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.887044 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.923875 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.949348 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:22 crc kubenswrapper[4768]: I1124 16:52:22.975896 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:22.999906 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc
639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:22Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.012501 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.034662 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.052112 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.073900 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.092419 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.107024 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.122929 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.138172 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.156097 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.177582 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.772819 4768 generic.go:334] "Generic (PLEG): container finished" podID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerID="e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa" exitCode=0 Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.772891 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" 
event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerDied","Data":"e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa"} Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.772935 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerStarted","Data":"159e1a87394f186553e65b9f559112b99abb5025bb98eb3095bce647f632a919"} Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.774548 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131"} Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.776339 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k8vfj" event={"ID":"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a","Type":"ContainerStarted","Data":"4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5"} Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.776401 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k8vfj" event={"ID":"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a","Type":"ContainerStarted","Data":"8a68eb422362ca5eff4ad534219505bbebc5b62c37e532bd1eb24c1b4e2cb42e"} Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.778451 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" event={"ID":"517d8128-bef5-40a3-a786-5010780c2a58","Type":"ContainerStarted","Data":"f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac"} Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.778490 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" event={"ID":"517d8128-bef5-40a3-a786-5010780c2a58","Type":"ContainerStarted","Data":"d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760"} Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.780619 4768 generic.go:334] "Generic (PLEG): container finished" podID="71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006" containerID="fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc" exitCode=0 Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.780702 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" event={"ID":"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006","Type":"ContainerDied","Data":"fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc"} Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.780750 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" event={"ID":"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006","Type":"ContainerStarted","Data":"d1ad0dd9d63bf689a4ea93a7c0c9b830d73f43fd491f0f3e957df718b95bb993"} Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.794104 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.808538 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.824590 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.842230 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.857677 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.876872 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.894275 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.918453 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc
639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.935355 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.951777 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.970388 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc kubenswrapper[4768]: I1124 16:52:23.989448 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:23 crc 
kubenswrapper[4768]: I1124 16:52:23.999841 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.021921 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.038826 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.053568 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc 
kubenswrapper[4768]: I1124 16:52:24.068401 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.084396 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.106202 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.121441 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.137108 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.155978 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.163779 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.163886 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.163925 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:24 crc kubenswrapper[4768]: E1124 16:52:24.163998 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:52:28.163968432 +0000 UTC m=+29.410937090 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:52:24 crc kubenswrapper[4768]: E1124 16:52:24.164017 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 16:52:24 crc kubenswrapper[4768]: E1124 16:52:24.164084 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-24 16:52:28.164070315 +0000 UTC m=+29.411038973 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 16:52:24 crc kubenswrapper[4768]: E1124 16:52:24.164091 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 16:52:24 crc kubenswrapper[4768]: E1124 16:52:24.164181 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:28.164159547 +0000 UTC m=+29.411128205 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.167309 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.184772 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z 
is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.196743 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.210203 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.224582 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.236314 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.265410 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" 
(UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.265478 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:24 crc kubenswrapper[4768]: E1124 16:52:24.265622 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 16:52:24 crc kubenswrapper[4768]: E1124 16:52:24.265640 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 16:52:24 crc kubenswrapper[4768]: E1124 16:52:24.265660 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 16:52:24 crc kubenswrapper[4768]: E1124 16:52:24.265670 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 16:52:24 crc kubenswrapper[4768]: E1124 16:52:24.265677 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:24 crc kubenswrapper[4768]: E1124 16:52:24.265687 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:24 crc kubenswrapper[4768]: E1124 16:52:24.265750 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:28.265729804 +0000 UTC m=+29.512698462 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:24 crc kubenswrapper[4768]: E1124 16:52:24.265770 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:28.265762765 +0000 UTC m=+29.512731423 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.580671 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.580810 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:24 crc kubenswrapper[4768]: E1124 16:52:24.580889 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.580923 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:24 crc kubenswrapper[4768]: E1124 16:52:24.581030 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:52:24 crc kubenswrapper[4768]: E1124 16:52:24.581082 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.727464 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-ql7kf"] Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.728248 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-ql7kf" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.730510 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.730637 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.730761 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.730896 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.746000 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\
\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.770536 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/630572ea-dec9-406a-9cca-da5ad59952b3-host\") pod \"node-ca-ql7kf\" (UID: \"630572ea-dec9-406a-9cca-da5ad59952b3\") " pod="openshift-image-registry/node-ca-ql7kf" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.770603 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqmp5\" (UniqueName: \"kubernetes.io/projected/630572ea-dec9-406a-9cca-da5ad59952b3-kube-api-access-pqmp5\") pod \"node-ca-ql7kf\" (UID: \"630572ea-dec9-406a-9cca-da5ad59952b3\") " pod="openshift-image-registry/node-ca-ql7kf" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.770703 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/630572ea-dec9-406a-9cca-da5ad59952b3-serviceca\") pod \"node-ca-ql7kf\" (UID: \"630572ea-dec9-406a-9cca-da5ad59952b3\") " pod="openshift-image-registry/node-ca-ql7kf" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.770828 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc
639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.787796 4768 generic.go:334] "Generic (PLEG): container finished" podID="71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006" containerID="1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051" exitCode=0 Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.787873 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" event={"ID":"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006","Type":"ContainerDied","Data":"1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051"} Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.788735 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.792790 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerStarted","Data":"9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4"} Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.792856 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerStarted","Data":"5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3"} Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.792874 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerStarted","Data":"5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec"} Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.792894 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerStarted","Data":"a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9"} Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.802969 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.816562 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.827514 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.849062 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.859827 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\
\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.870279 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.872140 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/630572ea-dec9-406a-9cca-da5ad59952b3-host\") pod \"node-ca-ql7kf\" (UID: \"630572ea-dec9-406a-9cca-da5ad59952b3\") " 
pod="openshift-image-registry/node-ca-ql7kf" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.872319 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/630572ea-dec9-406a-9cca-da5ad59952b3-host\") pod \"node-ca-ql7kf\" (UID: \"630572ea-dec9-406a-9cca-da5ad59952b3\") " pod="openshift-image-registry/node-ca-ql7kf" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.875744 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqmp5\" (UniqueName: \"kubernetes.io/projected/630572ea-dec9-406a-9cca-da5ad59952b3-kube-api-access-pqmp5\") pod \"node-ca-ql7kf\" (UID: \"630572ea-dec9-406a-9cca-da5ad59952b3\") " pod="openshift-image-registry/node-ca-ql7kf" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.875920 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/630572ea-dec9-406a-9cca-da5ad59952b3-serviceca\") pod \"node-ca-ql7kf\" (UID: \"630572ea-dec9-406a-9cca-da5ad59952b3\") " pod="openshift-image-registry/node-ca-ql7kf" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.878123 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/630572ea-dec9-406a-9cca-da5ad59952b3-serviceca\") pod \"node-ca-ql7kf\" (UID: \"630572ea-dec9-406a-9cca-da5ad59952b3\") " pod="openshift-image-registry/node-ca-ql7kf" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.885156 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.900349 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqmp5\" (UniqueName: \"kubernetes.io/projected/630572ea-dec9-406a-9cca-da5ad59952b3-kube-api-access-pqmp5\") pod \"node-ca-ql7kf\" (UID: \"630572ea-dec9-406a-9cca-da5ad59952b3\") " pod="openshift-image-registry/node-ca-ql7kf" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.904815 4768 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.925759 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.944554 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc kubenswrapper[4768]: I1124 16:52:24.975199 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:24 crc 
kubenswrapper[4768]: I1124 16:52:24.996604 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.009015 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.023275 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.039220 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift
-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.042924 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-ql7kf" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.055974 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\
\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: W1124 16:52:25.056123 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod630572ea_dec9_406a_9cca_da5ad59952b3.slice/crio-d8555450efee78713b27cc07fdb5c465c61911fc9de2cf6e16b287dbc10b869f WatchSource:0}: Error finding container d8555450efee78713b27cc07fdb5c465c61911fc9de2cf6e16b287dbc10b869f: Status 404 returned error can't find the container with id d8555450efee78713b27cc07fdb5c465c61911fc9de2cf6e16b287dbc10b869f Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.078796 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID
\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.094651 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.108881 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.124513 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.134672 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.154542 4768 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.166473 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.177913 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.190801 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.204893 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.217842 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.797149 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ql7kf" event={"ID":"630572ea-dec9-406a-9cca-da5ad59952b3","Type":"ContainerStarted","Data":"fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77"} Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.797209 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ql7kf" event={"ID":"630572ea-dec9-406a-9cca-da5ad59952b3","Type":"ContainerStarted","Data":"d8555450efee78713b27cc07fdb5c465c61911fc9de2cf6e16b287dbc10b869f"} Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.801667 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerStarted","Data":"0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169"} Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.801713 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerStarted","Data":"d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92"} Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.803747 4768 generic.go:334] "Generic (PLEG): container finished" podID="71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006" 
containerID="a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab" exitCode=0 Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.803788 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" event={"ID":"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006","Type":"ContainerDied","Data":"a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab"} Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.813514 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287f
aaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.830341 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.843453 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.855056 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.865134 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.878297 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.889934 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.906971 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.931169 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.944036 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.956755 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.971613 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.983488 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:25 crc kubenswrapper[4768]: I1124 16:52:25.992758 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}
\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.009776 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z 
is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.022583 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.039772 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.051543 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.071691 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc
639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.082165 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.109009 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z 
is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.121173 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.132391 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.141866 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.155721 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.194581 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.236277 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.280256 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.318522 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.363231 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.383476 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.385728 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.385765 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.385775 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.385874 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.393160 4768 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.393426 4768 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.394336 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.394377 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.394388 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.394404 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.394413 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:26Z","lastTransitionTime":"2025-11-24T16:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:26 crc kubenswrapper[4768]: E1124 16:52:26.416008 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.419718 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.419743 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.419753 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.419769 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.419779 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:26Z","lastTransitionTime":"2025-11-24T16:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:26 crc kubenswrapper[4768]: E1124 16:52:26.432384 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.438825 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.438881 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.438895 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.438913 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.438928 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:26Z","lastTransitionTime":"2025-11-24T16:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:26 crc kubenswrapper[4768]: E1124 16:52:26.469970 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.474270 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.474324 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.474337 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.474373 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.474386 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:26Z","lastTransitionTime":"2025-11-24T16:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:26 crc kubenswrapper[4768]: E1124 16:52:26.499166 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.504090 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.504151 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.504166 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.504190 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.504207 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:26Z","lastTransitionTime":"2025-11-24T16:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:26 crc kubenswrapper[4768]: E1124 16:52:26.528120 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: E1124 16:52:26.528254 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.530480 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.530509 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.530519 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.530536 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.530549 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:26Z","lastTransitionTime":"2025-11-24T16:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.580393 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.580411 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.580439 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:26 crc kubenswrapper[4768]: E1124 16:52:26.580524 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:52:26 crc kubenswrapper[4768]: E1124 16:52:26.580633 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:52:26 crc kubenswrapper[4768]: E1124 16:52:26.580705 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.633154 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.633193 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.633208 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.633226 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.633244 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:26Z","lastTransitionTime":"2025-11-24T16:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.735907 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.735954 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.735966 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.735984 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.735996 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:26Z","lastTransitionTime":"2025-11-24T16:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.811735 4768 generic.go:334] "Generic (PLEG): container finished" podID="71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006" containerID="617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5" exitCode=0 Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.811783 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" event={"ID":"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006","Type":"ContainerDied","Data":"617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5"} Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.836020 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-conf
ig\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.838811 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.838846 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.838855 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.838870 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.838880 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:26Z","lastTransitionTime":"2025-11-24T16:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.871158 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.887169 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.907870 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.926951 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.942033 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.943163 4768 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.943239 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.943260 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.943290 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.943310 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:26Z","lastTransitionTime":"2025-11-24T16:52:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.972059 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z 
is after 2025-08-24T17:21:41Z" Nov 24 16:52:26 crc kubenswrapper[4768]: I1124 16:52:26.988999 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:26Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.003036 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:27Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.022198 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:27Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.040544 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:27Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.046262 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.046322 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.046341 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.046402 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.046432 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:27Z","lastTransitionTime":"2025-11-24T16:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.061249 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:27Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.074317 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:27Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.090537 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:27Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.105066 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:27Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.152246 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.152322 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.152340 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.152416 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.152470 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:27Z","lastTransitionTime":"2025-11-24T16:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.256097 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.256146 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.256154 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.256170 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.256180 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:27Z","lastTransitionTime":"2025-11-24T16:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.359242 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.359386 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.359409 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.359438 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.359491 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:27Z","lastTransitionTime":"2025-11-24T16:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.462687 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.462752 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.462772 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.462801 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.462818 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:27Z","lastTransitionTime":"2025-11-24T16:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.565552 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.565620 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.565638 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.565667 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.565687 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:27Z","lastTransitionTime":"2025-11-24T16:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.669146 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.669211 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.669230 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.669255 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.669274 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:27Z","lastTransitionTime":"2025-11-24T16:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.772798 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.772853 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.772870 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.772896 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.772914 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:27Z","lastTransitionTime":"2025-11-24T16:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.820694 4768 generic.go:334] "Generic (PLEG): container finished" podID="71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006" containerID="f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11" exitCode=0 Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.820771 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" event={"ID":"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006","Type":"ContainerDied","Data":"f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11"} Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.827643 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerStarted","Data":"bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572"} Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.838233 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:27Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.855877 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:27Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.876097 4768 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.876138 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.876152 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.876172 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.876185 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:27Z","lastTransitionTime":"2025-11-24T16:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.911578 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:27Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.933442 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:27Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.954779 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:27Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.975136 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:27Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.980431 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.980463 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.980475 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.980493 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.980509 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:27Z","lastTransitionTime":"2025-11-24T16:52:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:27 crc kubenswrapper[4768]: I1124 16:52:27.992511 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqg
l7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:27Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.006298 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:28Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.017644 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:28Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.029476 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:28Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.048692 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc
639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:28Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.060236 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:28Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.078850 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:28Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.083067 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.083117 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.083127 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.083144 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.083153 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:28Z","lastTransitionTime":"2025-11-24T16:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.091304 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:28Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.100548 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:28Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.143892 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.144635 4768 scope.go:117] "RemoveContainer" containerID="906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230" Nov 24 16:52:28 crc kubenswrapper[4768]: E1124 16:52:28.144794 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.185486 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.185525 4768 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.185535 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.185552 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.185565 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:28Z","lastTransitionTime":"2025-11-24T16:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.210953 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:52:28 crc kubenswrapper[4768]: E1124 16:52:28.211079 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:52:36.211057391 +0000 UTC m=+37.458026049 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.211143 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.211198 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:28 crc kubenswrapper[4768]: E1124 16:52:28.211286 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 16:52:28 crc kubenswrapper[4768]: E1124 16:52:28.211326 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 
nodeName:}" failed. No retries permitted until 2025-11-24 16:52:36.211318538 +0000 UTC m=+37.458287196 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 16:52:28 crc kubenswrapper[4768]: E1124 16:52:28.211428 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 16:52:28 crc kubenswrapper[4768]: E1124 16:52:28.211935 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:36.211822611 +0000 UTC m=+37.458791279 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.287805 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.287852 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.287860 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.287875 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.287884 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:28Z","lastTransitionTime":"2025-11-24T16:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.312173 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.312278 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:28 crc kubenswrapper[4768]: E1124 16:52:28.312505 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 16:52:28 crc kubenswrapper[4768]: E1124 16:52:28.312550 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 16:52:28 crc kubenswrapper[4768]: E1124 16:52:28.312567 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 16:52:28 crc kubenswrapper[4768]: E1124 16:52:28.312589 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 16:52:28 crc kubenswrapper[4768]: E1124 16:52:28.312602 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:28 crc kubenswrapper[4768]: E1124 16:52:28.312614 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:28 crc kubenswrapper[4768]: E1124 16:52:28.312702 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:36.312668909 +0000 UTC m=+37.559637737 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:28 crc kubenswrapper[4768]: E1124 16:52:28.312744 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:36.312724281 +0000 UTC m=+37.559693179 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.390260 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.390295 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.390303 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.390318 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.390329 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:28Z","lastTransitionTime":"2025-11-24T16:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.492745 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.492785 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.492794 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.492808 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.492818 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:28Z","lastTransitionTime":"2025-11-24T16:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.580714 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.580744 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.580764 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:28 crc kubenswrapper[4768]: E1124 16:52:28.580904 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:52:28 crc kubenswrapper[4768]: E1124 16:52:28.581087 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:52:28 crc kubenswrapper[4768]: E1124 16:52:28.581239 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.595037 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.595089 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.595105 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.595179 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.595198 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:28Z","lastTransitionTime":"2025-11-24T16:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.698493 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.698555 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.698573 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.698601 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.698624 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:28Z","lastTransitionTime":"2025-11-24T16:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.801172 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.801207 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.801216 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.801235 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.801245 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:28Z","lastTransitionTime":"2025-11-24T16:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.835906 4768 generic.go:334] "Generic (PLEG): container finished" podID="71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006" containerID="fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca" exitCode=0 Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.835964 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" event={"ID":"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006","Type":"ContainerDied","Data":"fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca"} Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.856440 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:28Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.880367 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:28Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.900941 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:28Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.903947 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.903995 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.904009 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.904035 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.904059 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:28Z","lastTransitionTime":"2025-11-24T16:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.918460 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:28Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.932092 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:28Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.951710 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:28Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.965881 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:28Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:28 crc kubenswrapper[4768]: I1124 16:52:28.988773 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:28Z 
is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.002257 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.005961 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 
16:52:29.005999 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.006031 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.006048 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.006062 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:29Z","lastTransitionTime":"2025-11-24T16:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.014925 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.027407 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.037416 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.046128 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.056940 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.067174 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.108458 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.108492 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.108501 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.108514 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.108523 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:29Z","lastTransitionTime":"2025-11-24T16:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.211263 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.211308 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.211317 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.211333 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.211346 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:29Z","lastTransitionTime":"2025-11-24T16:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.313486 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.313528 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.313539 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.313555 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.313566 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:29Z","lastTransitionTime":"2025-11-24T16:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.416509 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.418482 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.418550 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.418709 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.418809 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:29Z","lastTransitionTime":"2025-11-24T16:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.522110 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.522157 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.522174 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.522200 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.522218 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:29Z","lastTransitionTime":"2025-11-24T16:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.595720 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.613536 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.625014 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.625051 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.625063 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.625078 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.625091 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:29Z","lastTransitionTime":"2025-11-24T16:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.627658 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.638153 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.649655 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.664803 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.679841 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.700079 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.721752 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc
639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.726930 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.726969 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.726981 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.726999 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.727010 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:29Z","lastTransitionTime":"2025-11-24T16:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.735610 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.745384 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.754870 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.765137 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.775757 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.790926 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.829134 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.829180 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.829192 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.829209 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.829221 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:29Z","lastTransitionTime":"2025-11-24T16:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.844029 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerStarted","Data":"5905bbb5c45f6a117254020e1115bdf5a084b35b763c89e24c2124f32155bf41"} Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.844448 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.844607 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.848274 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" event={"ID":"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006","Type":"ContainerStarted","Data":"13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db"} Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.853967 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.868701 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.870610 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.877106 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5905bbb5c45f6a117254020e1115bdf5a084b35b763c89e24c2124f32155bf41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.888327 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.899242 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.908536 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.918083 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.928118 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.931851 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.931950 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.932055 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.932125 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.932183 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:29Z","lastTransitionTime":"2025-11-24T16:52:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.944924 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.959013 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.972443 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:29 crc kubenswrapper[4768]: I1124 16:52:29.986117 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:29.999932 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.011714 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:30Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.030810 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:30Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.034763 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.034817 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.034832 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.034855 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.034871 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:30Z","lastTransitionTime":"2025-11-24T16:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.048829 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:30Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.061256 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:30Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.077525 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:30Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.092940 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:30Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.103701 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:30Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.124927 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:30Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.137021 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.137081 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.137099 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.137125 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.137143 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:30Z","lastTransitionTime":"2025-11-24T16:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.145390 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:30Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.162235 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:30Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.178760 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:30Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.191067 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:30Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.207643 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5905bbb5c45f6a117254020e1115bdf5a084b35b763c89e24c2124f32155bf41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:30Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.218936 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:30Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.227779 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:30Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.239565 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.239612 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.239624 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.239642 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.239656 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:30Z","lastTransitionTime":"2025-11-24T16:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.240249 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:30Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.251907 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:30Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.263997 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:30Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.342025 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.342061 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.342069 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.342083 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.342094 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:30Z","lastTransitionTime":"2025-11-24T16:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.444054 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.444110 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.444127 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.444150 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.444167 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:30Z","lastTransitionTime":"2025-11-24T16:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.547031 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.547090 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.547103 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.547123 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.547136 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:30Z","lastTransitionTime":"2025-11-24T16:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.580703 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.580795 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:30 crc kubenswrapper[4768]: E1124 16:52:30.580832 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.580895 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:30 crc kubenswrapper[4768]: E1124 16:52:30.581033 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:52:30 crc kubenswrapper[4768]: E1124 16:52:30.581221 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.649462 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.649526 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.649542 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.649562 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.649575 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:30Z","lastTransitionTime":"2025-11-24T16:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.752845 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.752915 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.752938 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.752968 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.752990 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:30Z","lastTransitionTime":"2025-11-24T16:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.854202 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.855720 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.855755 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.855766 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.855782 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.855794 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:30Z","lastTransitionTime":"2025-11-24T16:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.958642 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.958674 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.958683 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.958696 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:30 crc kubenswrapper[4768]: I1124 16:52:30.958705 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:30Z","lastTransitionTime":"2025-11-24T16:52:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.062505 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.062557 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.062569 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.062590 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.062602 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:31Z","lastTransitionTime":"2025-11-24T16:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.165048 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.165083 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.165095 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.165111 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.165123 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:31Z","lastTransitionTime":"2025-11-24T16:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.267854 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.267907 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.267933 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.267956 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.267972 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:31Z","lastTransitionTime":"2025-11-24T16:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.370657 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.370686 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.370704 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.370719 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.370729 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:31Z","lastTransitionTime":"2025-11-24T16:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.472669 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.472731 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.472740 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.472754 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.472764 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:31Z","lastTransitionTime":"2025-11-24T16:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.575750 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.575801 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.575812 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.575828 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.575837 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:31Z","lastTransitionTime":"2025-11-24T16:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.679339 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.679426 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.679446 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.679471 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.679490 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:31Z","lastTransitionTime":"2025-11-24T16:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.782192 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.782236 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.782247 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.782261 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.782271 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:31Z","lastTransitionTime":"2025-11-24T16:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.856952 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.885100 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.885148 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.885166 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.885191 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.885221 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:31Z","lastTransitionTime":"2025-11-24T16:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.988919 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.988975 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.988987 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.989009 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:31 crc kubenswrapper[4768]: I1124 16:52:31.989022 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:31Z","lastTransitionTime":"2025-11-24T16:52:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.091908 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.091963 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.091972 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.091993 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.092005 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:32Z","lastTransitionTime":"2025-11-24T16:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.195261 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.195337 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.195369 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.195395 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.195411 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:32Z","lastTransitionTime":"2025-11-24T16:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.299318 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.299384 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.299398 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.299418 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.299428 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:32Z","lastTransitionTime":"2025-11-24T16:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.403233 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.403305 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.403328 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.403399 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.403432 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:32Z","lastTransitionTime":"2025-11-24T16:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.506421 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.506520 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.506539 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.506566 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.506585 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:32Z","lastTransitionTime":"2025-11-24T16:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.580639 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.580674 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.580679 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:32 crc kubenswrapper[4768]: E1124 16:52:32.580847 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:52:32 crc kubenswrapper[4768]: E1124 16:52:32.581000 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:52:32 crc kubenswrapper[4768]: E1124 16:52:32.581180 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.609033 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.609087 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.609103 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.609131 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.609151 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:32Z","lastTransitionTime":"2025-11-24T16:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.712881 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.712951 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.712973 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.713007 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.713029 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:32Z","lastTransitionTime":"2025-11-24T16:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.816184 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.816249 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.816267 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.816294 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.816313 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:32Z","lastTransitionTime":"2025-11-24T16:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.863733 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-98lk9_17a83d5e-e5e7-422d-ab0e-647ca2eefb37/ovnkube-controller/0.log" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.869105 4768 generic.go:334] "Generic (PLEG): container finished" podID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerID="5905bbb5c45f6a117254020e1115bdf5a084b35b763c89e24c2124f32155bf41" exitCode=1 Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.869178 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerDied","Data":"5905bbb5c45f6a117254020e1115bdf5a084b35b763c89e24c2124f32155bf41"} Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.870505 4768 scope.go:117] "RemoveContainer" containerID="5905bbb5c45f6a117254020e1115bdf5a084b35b763c89e24c2124f32155bf41" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.895647 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:32Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.918062 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:32Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.918681 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.918719 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.918731 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.918750 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.918763 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:32Z","lastTransitionTime":"2025-11-24T16:52:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.950664 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5905bbb5c45f6a117254020e1115bdf5a084b35b763c89e24c2124f32155bf41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5905bbb5c45f6a117254020e1115bdf5a084b35b763c89e24c2124f32155bf41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:32Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 16:52:32.238121 6088 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1124 16:52:32.238155 6088 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1124 16:52:32.238181 6088 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1124 16:52:32.238237 6088 factory.go:1336] Added *v1.Node event handler 7\\\\nI1124 16:52:32.238274 6088 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1124 16:52:32.238616 6088 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 16:52:32.238704 6088 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 16:52:32.238743 6088 ovnkube.go:599] Stopped ovnkube\\\\nI1124 16:52:32.238764 6088 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 16:52:32.238878 6088 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:32Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.967496 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720
243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:32Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:32 crc kubenswrapper[4768]: I1124 16:52:32.983209 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:32Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.002413 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:32Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.021594 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:33Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.021720 4768 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.022026 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.022038 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.022057 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.022069 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:33Z","lastTransitionTime":"2025-11-24T16:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.042339 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:33Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.066219 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:33Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.081579 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:33Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.098432 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:33Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.124306 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.124540 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:33 crc 
kubenswrapper[4768]: I1124 16:52:33.124618 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.124687 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.124746 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:33Z","lastTransitionTime":"2025-11-24T16:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.126763 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:33Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.141967 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:33Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.155030 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:33Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.169285 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:33Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.227501 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.227545 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.227561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.227580 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.227594 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:33Z","lastTransitionTime":"2025-11-24T16:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.329717 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.330132 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.330304 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.330506 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.330577 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:33Z","lastTransitionTime":"2025-11-24T16:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.433485 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.433534 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.433551 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.433574 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.433593 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:33Z","lastTransitionTime":"2025-11-24T16:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.535945 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.536186 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.536253 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.536327 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.536432 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:33Z","lastTransitionTime":"2025-11-24T16:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.638839 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.638888 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.638899 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.638920 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.638932 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:33Z","lastTransitionTime":"2025-11-24T16:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.741115 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.741152 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.741164 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.741180 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.741189 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:33Z","lastTransitionTime":"2025-11-24T16:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.844034 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.844389 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.844398 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.844414 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.844424 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:33Z","lastTransitionTime":"2025-11-24T16:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.874551 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-98lk9_17a83d5e-e5e7-422d-ab0e-647ca2eefb37/ovnkube-controller/0.log" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.877833 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerStarted","Data":"9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774"} Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.877990 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.902657 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:33Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.915477 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:33Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.933871 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:33Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.946298 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.946378 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:33 crc 
kubenswrapper[4768]: I1124 16:52:33.946390 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.946426 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.946440 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:33Z","lastTransitionTime":"2025-11-24T16:52:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.954983 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:33Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.968639 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:33Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.980139 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:33Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:33 crc kubenswrapper[4768]: I1124 16:52:33.992596 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:33Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.006793 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:34Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.019583 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}
\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:34Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.041190 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/
tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5905bbb5c45f6a117254020e1115bdf5a084b35b763c89e24c2124f32155bf41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:32Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 16:52:32.238121 6088 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1124 16:52:32.238155 6088 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1124 16:52:32.238181 6088 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1124 16:52:32.238237 6088 factory.go:1336] Added *v1.Node event handler 7\\\\nI1124 16:52:32.238274 6088 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1124 16:52:32.238616 6088 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 16:52:32.238704 6088 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 16:52:32.238743 6088 ovnkube.go:599] Stopped ovnkube\\\\nI1124 16:52:32.238764 6088 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 16:52:32.238878 6088 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:34Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.049188 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.049243 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.049259 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.049281 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.049298 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:34Z","lastTransitionTime":"2025-11-24T16:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.060189 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:34Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.079365 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:34Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.095967 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:34Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.112472 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:34Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.126411 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:34Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.152515 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.152579 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.152605 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.152640 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.152663 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:34Z","lastTransitionTime":"2025-11-24T16:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.255811 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.255878 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.255899 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.255926 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.255949 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:34Z","lastTransitionTime":"2025-11-24T16:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.358166 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.358238 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.358262 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.358295 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.358489 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:34Z","lastTransitionTime":"2025-11-24T16:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.461681 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.461753 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.461775 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.461806 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.461827 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:34Z","lastTransitionTime":"2025-11-24T16:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.564587 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.564635 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.564645 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.564661 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.564674 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:34Z","lastTransitionTime":"2025-11-24T16:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.579895 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.579941 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:34 crc kubenswrapper[4768]: E1124 16:52:34.580019 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:52:34 crc kubenswrapper[4768]: E1124 16:52:34.580155 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.579946 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:34 crc kubenswrapper[4768]: E1124 16:52:34.580402 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.667154 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.667212 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.667223 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.667243 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.667257 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:34Z","lastTransitionTime":"2025-11-24T16:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.770516 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.770582 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.770604 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.770632 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.770658 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:34Z","lastTransitionTime":"2025-11-24T16:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.873739 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.873794 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.873810 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.873839 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.873856 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:34Z","lastTransitionTime":"2025-11-24T16:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.882844 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-98lk9_17a83d5e-e5e7-422d-ab0e-647ca2eefb37/ovnkube-controller/1.log" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.883942 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-98lk9_17a83d5e-e5e7-422d-ab0e-647ca2eefb37/ovnkube-controller/0.log" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.887914 4768 generic.go:334] "Generic (PLEG): container finished" podID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerID="9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774" exitCode=1 Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.887967 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerDied","Data":"9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774"} Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.888027 4768 scope.go:117] "RemoveContainer" containerID="5905bbb5c45f6a117254020e1115bdf5a084b35b763c89e24c2124f32155bf41" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.889962 4768 scope.go:117] "RemoveContainer" containerID="9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774" Nov 24 16:52:34 crc kubenswrapper[4768]: E1124 16:52:34.890239 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-98lk9_openshift-ovn-kubernetes(17a83d5e-e5e7-422d-ab0e-647ca2eefb37)\"" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.913147 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:34Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.934479 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:34Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.954873 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:34Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.975897 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.975938 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.975951 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.975969 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.975982 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:34Z","lastTransitionTime":"2025-11-24T16:52:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:34 crc kubenswrapper[4768]: I1124 16:52:34.985551 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:34Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.004868 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.024452 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.042218 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.060329 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.072724 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}
\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.079546 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.079592 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.079609 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.079633 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.079653 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:35Z","lastTransitionTime":"2025-11-24T16:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.096374 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a9be79a1f32dfbc415c53d89f8657a7928038f7
e0d8f51be4632f688c5a7774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5905bbb5c45f6a117254020e1115bdf5a084b35b763c89e24c2124f32155bf41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:32Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 16:52:32.238121 6088 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1124 16:52:32.238155 6088 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1124 16:52:32.238181 6088 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1124 16:52:32.238237 6088 factory.go:1336] Added *v1.Node event handler 7\\\\nI1124 16:52:32.238274 6088 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1124 16:52:32.238616 6088 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 16:52:32.238704 6088 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 16:52:32.238743 6088 ovnkube.go:599] Stopped ovnkube\\\\nI1124 16:52:32.238764 6088 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 16:52:32.238878 6088 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:34Z\\\",\\\"message\\\":\\\"8338 6223 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1124 16:52:33.918391 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918404 6223 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918409 6223 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 16:52:33.918414 6223 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1124 16:52:33.918428 6223 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1124 16:52:33.918437 6223 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918445 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1124 16:52:33.918463 6223 ovnkube.go:137] failed to run 
ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.117075 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.128905 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4"] Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.129752 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.132286 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.132544 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.134465 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.154940 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.172595 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.182014 4768 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.182065 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.182083 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.182107 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.182128 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:35Z","lastTransitionTime":"2025-11-24T16:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.184617 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.203607 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 
2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.218924 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.221631 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9wdz4\" (UID: \"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.221704 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9wdz4\" (UID: \"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.221775 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9wdz4\" (UID: \"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.221852 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdhwn\" (UniqueName: \"kubernetes.io/projected/1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9-kube-api-access-cdhwn\") pod \"ovnkube-control-plane-749d76644c-9wdz4\" (UID: \"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.249729 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a9be79a1f32dfbc415c53d89f8657a7928038f7
e0d8f51be4632f688c5a7774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5905bbb5c45f6a117254020e1115bdf5a084b35b763c89e24c2124f32155bf41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:32Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 16:52:32.238121 6088 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1124 16:52:32.238155 6088 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1124 16:52:32.238181 6088 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1124 16:52:32.238237 6088 factory.go:1336] Added *v1.Node event handler 7\\\\nI1124 16:52:32.238274 6088 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1124 16:52:32.238616 6088 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 16:52:32.238704 6088 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 16:52:32.238743 6088 ovnkube.go:599] Stopped ovnkube\\\\nI1124 16:52:32.238764 6088 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 16:52:32.238878 6088 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:34Z\\\",\\\"message\\\":\\\"8338 6223 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1124 16:52:33.918391 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918404 6223 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918409 6223 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 16:52:33.918414 6223 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1124 16:52:33.918428 6223 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1124 16:52:33.918437 6223 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918445 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1124 16:52:33.918463 6223 ovnkube.go:137] failed to run 
ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.269279 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.285904 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.285989 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.286015 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.286051 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.286075 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:35Z","lastTransitionTime":"2025-11-24T16:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.289904 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.305745 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.322134 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.323003 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9wdz4\" (UID: \"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.323069 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9wdz4\" (UID: \"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.323137 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9wdz4\" (UID: \"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.323215 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdhwn\" (UniqueName: \"kubernetes.io/projected/1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9-kube-api-access-cdhwn\") pod \"ovnkube-control-plane-749d76644c-9wdz4\" (UID: \"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.324192 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9wdz4\" (UID: \"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.324497 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9wdz4\" (UID: \"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.332040 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9wdz4\" (UID: \"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.337611 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.352602 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdhwn\" (UniqueName: \"kubernetes.io/projected/1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9-kube-api-access-cdhwn\") pod \"ovnkube-control-plane-749d76644c-9wdz4\" (UID: \"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.357000 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.376508 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.388577 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.388635 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.388652 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.388675 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.388691 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:35Z","lastTransitionTime":"2025-11-24T16:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.398507 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.421798 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87e
cd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.438258 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.448946 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.457458 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: W1124 16:52:35.465888 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d5c3c1e_571c_4b97_8d0c_63a6c0c126d9.slice/crio-1417d63a9d495d087d1d3f11f16bc88ea992e9159336f6918b373c30ad1a07d0 WatchSource:0}: Error finding container 1417d63a9d495d087d1d3f11f16bc88ea992e9159336f6918b373c30ad1a07d0: Status 404 returned error can't find the container with id 1417d63a9d495d087d1d3f11f16bc88ea992e9159336f6918b373c30ad1a07d0 Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.480799 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.491192 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.491261 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.491285 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.491314 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.491338 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:35Z","lastTransitionTime":"2025-11-24T16:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.497991 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wdz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.593966 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.594012 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.594022 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.594038 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.594050 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:35Z","lastTransitionTime":"2025-11-24T16:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.696607 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.696652 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.696669 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.696686 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.696696 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:35Z","lastTransitionTime":"2025-11-24T16:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.799073 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.799111 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.799123 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.799154 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.799165 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:35Z","lastTransitionTime":"2025-11-24T16:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.893892 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" event={"ID":"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9","Type":"ContainerStarted","Data":"6511b48fa7f312490f11e2e4851845631f91f5e9b75073f49f938f237c9b0aa6"} Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.893944 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" event={"ID":"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9","Type":"ContainerStarted","Data":"89c17685b6f60ddfe6df8a9e2101c8261b61a004fe711173e41d189807791444"} Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.893954 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" event={"ID":"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9","Type":"ContainerStarted","Data":"1417d63a9d495d087d1d3f11f16bc88ea992e9159336f6918b373c30ad1a07d0"} Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.897119 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-98lk9_17a83d5e-e5e7-422d-ab0e-647ca2eefb37/ovnkube-controller/1.log" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.901505 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.901547 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.901561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.901581 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.901592 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:35Z","lastTransitionTime":"2025-11-24T16:52:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.915323 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.928143 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.940182 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.952414 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c17685b6f60ddfe6df8a9e2101c8261b61a004fe711173e41d189807791444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6511b48fa7f312490f11e2e4851845631f91f5e9b75073f49f938f237c9b0aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wdz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.970148 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc
639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:35 crc kubenswrapper[4768]: I1124 16:52:35.983589 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.001630 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:35Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.004172 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.004240 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.004249 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.004264 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:36 crc 
kubenswrapper[4768]: I1124 16:52:36.004273 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:36Z","lastTransitionTime":"2025-11-24T16:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.014598 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":
\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.025925 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/
var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.036441 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.054743 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5905bbb5c45f6a117254020e1115bdf5a084b35b763c89e24c2124f32155bf41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:32Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 16:52:32.238121 6088 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1124 16:52:32.238155 6088 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1124 16:52:32.238181 6088 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1124 16:52:32.238237 6088 factory.go:1336] Added *v1.Node event handler 7\\\\nI1124 16:52:32.238274 6088 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1124 16:52:32.238616 6088 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 16:52:32.238704 6088 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 16:52:32.238743 6088 ovnkube.go:599] Stopped ovnkube\\\\nI1124 16:52:32.238764 6088 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 16:52:32.238878 6088 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:34Z\\\",\\\"message\\\":\\\"8338 6223 default_network_controller.go:776] Recording success event on pod 
openshift-etcd/etcd-crc\\\\nI1124 16:52:33.918391 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918404 6223 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918409 6223 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 16:52:33.918414 6223 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1124 16:52:33.918428 6223 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1124 16:52:33.918437 6223 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918445 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1124 16:52:33.918463 6223 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.068820 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.083394 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.096629 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.105891 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.105927 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.105938 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.105954 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.105966 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:36Z","lastTransitionTime":"2025-11-24T16:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.108942 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.118472 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.208685 4768 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.208726 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.208736 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.208754 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.208766 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:36Z","lastTransitionTime":"2025-11-24T16:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.232420 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.232707 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:52:52.232632484 +0000 UTC m=+53.479601202 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.233047 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.233219 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.233311 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.233375 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.233437 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:52.233413217 +0000 UTC m=+53.480381915 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.233497 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:52.233466588 +0000 UTC m=+53.480435496 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.233808 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-275xl"] Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.234358 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.234428 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.248199 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.258722 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.269596 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.281711 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.295670 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.308158 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.310858 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.310908 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.310920 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.310940 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.310957 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:36Z","lastTransitionTime":"2025-11-24T16:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.323536 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.333613 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-275xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff18637c-91e0-4ea4-9f9a-53c5b0277927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-275xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.333958 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.334072 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs\") pod \"network-metrics-daemon-275xl\" (UID: \"ff18637c-91e0-4ea4-9f9a-53c5b0277927\") " pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.334150 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69lz7\" (UniqueName: \"kubernetes.io/projected/ff18637c-91e0-4ea4-9f9a-53c5b0277927-kube-api-access-69lz7\") pod \"network-metrics-daemon-275xl\" (UID: \"ff18637c-91e0-4ea4-9f9a-53c5b0277927\") " pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.334103 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.334241 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.334256 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.334304 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:52.334286185 +0000 UTC m=+53.581254843 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.334445 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.334463 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.334475 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.334511 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:52.334498241 +0000 UTC m=+53.581466909 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.334677 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.347960 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.362267 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.379449 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.391808 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c17685b6f60ddfe6df8a9e2101c8261b61a004fe711173e41d189807791444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6511b48fa7f312490f11e2e4851845631f91f5e9b75073f49f938f237c9b0aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wdz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 
16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.413376 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.413969 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.414009 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.414024 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.414046 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.414063 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:36Z","lastTransitionTime":"2025-11-24T16:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.435075 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.435583 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs\") pod \"network-metrics-daemon-275xl\" (UID: \"ff18637c-91e0-4ea4-9f9a-53c5b0277927\") " pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.435665 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69lz7\" (UniqueName: \"kubernetes.io/projected/ff18637c-91e0-4ea4-9f9a-53c5b0277927-kube-api-access-69lz7\") pod \"network-metrics-daemon-275xl\" (UID: \"ff18637c-91e0-4ea4-9f9a-53c5b0277927\") " pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.435854 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.435964 4768 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs podName:ff18637c-91e0-4ea4-9f9a-53c5b0277927 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:36.935945965 +0000 UTC m=+38.182914623 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs") pod "network-metrics-daemon-275xl" (UID: "ff18637c-91e0-4ea4-9f9a-53c5b0277927") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.451982 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69lz7\" (UniqueName: \"kubernetes.io/projected/ff18637c-91e0-4ea4-9f9a-53c5b0277927-kube-api-access-69lz7\") pod \"network-metrics-daemon-275xl\" (UID: \"ff18637c-91e0-4ea4-9f9a-53c5b0277927\") " pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.466570 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a9be79a1f32dfbc415c53d89f8657a7928038f7
e0d8f51be4632f688c5a7774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5905bbb5c45f6a117254020e1115bdf5a084b35b763c89e24c2124f32155bf41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:32Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 16:52:32.238121 6088 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1124 16:52:32.238155 6088 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1124 16:52:32.238181 6088 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1124 16:52:32.238237 6088 factory.go:1336] Added *v1.Node event handler 7\\\\nI1124 16:52:32.238274 6088 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1124 16:52:32.238616 6088 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 16:52:32.238704 6088 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 16:52:32.238743 6088 ovnkube.go:599] Stopped ovnkube\\\\nI1124 16:52:32.238764 6088 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 16:52:32.238878 6088 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:34Z\\\",\\\"message\\\":\\\"8338 6223 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1124 16:52:33.918391 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918404 6223 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918409 6223 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 16:52:33.918414 6223 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1124 16:52:33.918428 6223 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1124 16:52:33.918437 6223 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918445 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1124 16:52:33.918463 6223 ovnkube.go:137] failed to run 
ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.482987 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.494011 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.517364 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.517421 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.517436 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.517461 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.517475 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:36Z","lastTransitionTime":"2025-11-24T16:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.580598 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.580773 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.580622 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.580895 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.580610 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.580995 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.620476 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.620533 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.620571 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.620593 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.620605 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:36Z","lastTransitionTime":"2025-11-24T16:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
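[editor's note] Each pod_workers.go:1301 sync failure above repeats the same NetworkReady=false message: the kubelet found no CNI configuration file in /etc/kubernetes/cni/net.d/. A sketch of the equivalent check follows; the directory comes straight from the log message, while the accepted extensions (.conf, .conflist, .json) are an assumption about what the network plugin would look for:

// cnicheck.go - list the CNI conf dir named in the NetworkPluginNotReady
// message and report whether any network configuration is present.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken from the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config present:", e.Name())
			found = true
		}
	}
	if !found {
		// Matches the log: "no CNI configuration file ... Has your
		// network provider started?"
		fmt.Println("no CNI configuration file; network plugin not ready")
	}
}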
Has your network provider started?"} Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.668984 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.669031 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.669043 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.669063 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.669079 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:36Z","lastTransitionTime":"2025-11-24T16:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.687064 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
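[editor's note] kubelet_node_status.go:585 announces "Error updating node status, will retry": the kubelet attempts the node-status patch several times back-to-back before giving up until its next sync tick. A sketch of that loop; the retry budget of 5 is an assumption for illustration, as the log does not state it:

// retrystatus.go - sketch of the "will retry" behavior around
// kubelet_node_status.go:585.
package main

import (
	"errors"
	"fmt"
)

// patchNodeStatus stands in for the PATCH that fails above with
// "failed calling webhook ... certificate has expired".
func patchNodeStatus() error {
	return errors.New("Internal error occurred: failed calling webhook")
}

func main() {
	const nodeStatusUpdateRetry = 5 // assumed retry budget
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return
	}
	fmt.Println("unable to update node status after retries")
}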
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.690974 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.691078 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.691143 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.691212 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.691290 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:36Z","lastTransitionTime":"2025-11-24T16:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.706949 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.711285 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.711327 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.711341 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.711370 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.711380 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:36Z","lastTransitionTime":"2025-11-24T16:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.728076 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.732098 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.732149 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.732162 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.732181 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.732194 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:36Z","lastTransitionTime":"2025-11-24T16:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.750153 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.754184 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.754286 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.754362 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.754443 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.754515 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:36Z","lastTransitionTime":"2025-11-24T16:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.772440 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:36Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.772772 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.774846 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.774891 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.774908 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.774932 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.774948 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:36Z","lastTransitionTime":"2025-11-24T16:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.877789 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.878021 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.878044 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.878078 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.878104 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:36Z","lastTransitionTime":"2025-11-24T16:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.941073 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs\") pod \"network-metrics-daemon-275xl\" (UID: \"ff18637c-91e0-4ea4-9f9a-53c5b0277927\") " pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.941310 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 16:52:36 crc kubenswrapper[4768]: E1124 16:52:36.941827 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs podName:ff18637c-91e0-4ea4-9f9a-53c5b0277927 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:37.9418085 +0000 UTC m=+39.188777158 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs") pod "network-metrics-daemon-275xl" (UID: "ff18637c-91e0-4ea4-9f9a-53c5b0277927") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.981860 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.981936 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.981961 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.981995 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:36 crc kubenswrapper[4768]: I1124 16:52:36.982021 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:36Z","lastTransitionTime":"2025-11-24T16:52:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.085820 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.085873 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.085885 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.085905 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.085917 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:37Z","lastTransitionTime":"2025-11-24T16:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.189241 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.189651 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.189669 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.189694 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.189718 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:37Z","lastTransitionTime":"2025-11-24T16:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.291957 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.292020 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.292043 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.292069 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.292089 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:37Z","lastTransitionTime":"2025-11-24T16:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.393928 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.393976 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.393988 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.394004 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.394015 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:37Z","lastTransitionTime":"2025-11-24T16:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.496493 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.496534 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.496545 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.496561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.496571 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:37Z","lastTransitionTime":"2025-11-24T16:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.580832 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:52:37 crc kubenswrapper[4768]: E1124 16:52:37.581084 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.599053 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.599092 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.599101 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.599117 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.599126 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:37Z","lastTransitionTime":"2025-11-24T16:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.701758 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.701799 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.701808 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.701824 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.701833 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:37Z","lastTransitionTime":"2025-11-24T16:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.804382 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.804421 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.804430 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.804446 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.804467 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:37Z","lastTransitionTime":"2025-11-24T16:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.906966 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.907002 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.907035 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.907064 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.907077 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:37Z","lastTransitionTime":"2025-11-24T16:52:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:37 crc kubenswrapper[4768]: I1124 16:52:37.953919 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs\") pod \"network-metrics-daemon-275xl\" (UID: \"ff18637c-91e0-4ea4-9f9a-53c5b0277927\") " pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:52:37 crc kubenswrapper[4768]: E1124 16:52:37.954103 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 16:52:37 crc kubenswrapper[4768]: E1124 16:52:37.954208 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs podName:ff18637c-91e0-4ea4-9f9a-53c5b0277927 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:39.954180758 +0000 UTC m=+41.201149456 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs") pod "network-metrics-daemon-275xl" (UID: "ff18637c-91e0-4ea4-9f9a-53c5b0277927") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.009247 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.009287 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.009299 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.009314 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.009326 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:38Z","lastTransitionTime":"2025-11-24T16:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.112089 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.112186 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.112210 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.112245 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.112270 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:38Z","lastTransitionTime":"2025-11-24T16:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.214984 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.215023 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.215032 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.215049 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.215058 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:38Z","lastTransitionTime":"2025-11-24T16:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.317637 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.317704 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.317741 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.317768 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.317786 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:38Z","lastTransitionTime":"2025-11-24T16:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.420774 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.420822 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.420834 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.420850 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.420863 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:38Z","lastTransitionTime":"2025-11-24T16:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.523377 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.523411 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.523418 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.523451 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.523461 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:38Z","lastTransitionTime":"2025-11-24T16:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.580315 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.580412 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.580412 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:38 crc kubenswrapper[4768]: E1124 16:52:38.580457 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:52:38 crc kubenswrapper[4768]: E1124 16:52:38.580574 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:52:38 crc kubenswrapper[4768]: E1124 16:52:38.580699 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.626393 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.626446 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.626464 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.626518 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.626535 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:38Z","lastTransitionTime":"2025-11-24T16:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.728649 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.728708 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.728726 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.728750 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.728767 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:38Z","lastTransitionTime":"2025-11-24T16:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.831081 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.831148 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.831166 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.831225 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.831244 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:38Z","lastTransitionTime":"2025-11-24T16:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.934286 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.934654 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.934792 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.935105 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:38 crc kubenswrapper[4768]: I1124 16:52:38.935234 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:38Z","lastTransitionTime":"2025-11-24T16:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.038235 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.038313 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.038325 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.038359 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.038373 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:39Z","lastTransitionTime":"2025-11-24T16:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.140338 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.140452 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.140475 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.140509 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.140534 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:39Z","lastTransitionTime":"2025-11-24T16:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.242918 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.242983 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.243001 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.243026 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.243045 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:39Z","lastTransitionTime":"2025-11-24T16:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.346269 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.346332 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.346382 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.346408 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.346432 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:39Z","lastTransitionTime":"2025-11-24T16:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.448948 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.448991 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.449002 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.449020 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.449031 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:39Z","lastTransitionTime":"2025-11-24T16:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.551207 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.551260 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.551271 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.551289 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.551300 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:39Z","lastTransitionTime":"2025-11-24T16:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.579831 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:52:39 crc kubenswrapper[4768]: E1124 16:52:39.580034 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.601133 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:39Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.623924 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:39Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.644103 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:39Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.654068 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.654109 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.654123 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.654145 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.654162 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:39Z","lastTransitionTime":"2025-11-24T16:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.680715 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-275xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff18637c-91e0-4ea4-9f9a-53c5b0277927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-275xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:39Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.719108 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc
639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:39Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.737190 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:39Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.747983 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:39Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.755753 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.755787 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.755799 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.755815 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:39 crc 
kubenswrapper[4768]: I1124 16:52:39.755828 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:39Z","lastTransitionTime":"2025-11-24T16:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.760645 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":
\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:39Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.770702 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c17685b6f60ddfe6df8a9e2101c8261b61a004fe711173e41d189807791444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6511b48fa7f312490f11e2e4851845631f91f5e9b75073f49f938f237c9b0aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wdz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:39Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.786695 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-
24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:39Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.797177 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:39Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:39 crc 
kubenswrapper[4768]: I1124 16:52:39.815105 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c
5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5905bbb5c45f6a117254020e1115bdf5a084b35b763c89e24c2124f32155bf41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:32Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 16:52:32.238121 6088 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1124 16:52:32.238155 6088 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1124 16:52:32.238181 6088 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1124 16:52:32.238237 6088 factory.go:1336] Added *v1.Node event handler 7\\\\nI1124 16:52:32.238274 6088 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1124 16:52:32.238616 6088 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 16:52:32.238704 6088 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 16:52:32.238743 6088 ovnkube.go:599] Stopped ovnkube\\\\nI1124 16:52:32.238764 6088 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 16:52:32.238878 6088 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:34Z\\\",\\\"message\\\":\\\"8338 6223 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1124 16:52:33.918391 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918404 6223 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918409 6223 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 16:52:33.918414 6223 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1124 16:52:33.918428 6223 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1124 16:52:33.918437 6223 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918445 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1124 16:52:33.918463 6223 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:39Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.829687 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347202
43b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:39Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.843426 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:39Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.857628 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:39Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.858281 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.858315 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.858327 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.858358 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.858368 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:39Z","lastTransitionTime":"2025-11-24T16:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.869974 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:39Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.881135 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:39Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.961495 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.961540 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.961553 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.961572 4768 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.961587 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:39Z","lastTransitionTime":"2025-11-24T16:52:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:39 crc kubenswrapper[4768]: I1124 16:52:39.972595 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs\") pod \"network-metrics-daemon-275xl\" (UID: \"ff18637c-91e0-4ea4-9f9a-53c5b0277927\") " pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:52:39 crc kubenswrapper[4768]: E1124 16:52:39.972725 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 16:52:39 crc kubenswrapper[4768]: E1124 16:52:39.972779 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs podName:ff18637c-91e0-4ea4-9f9a-53c5b0277927 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:43.972760975 +0000 UTC m=+45.219729633 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs") pod "network-metrics-daemon-275xl" (UID: "ff18637c-91e0-4ea4-9f9a-53c5b0277927") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.064602 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.065415 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.065562 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.065742 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.065868 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:40Z","lastTransitionTime":"2025-11-24T16:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.169544 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.169623 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.169639 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.169665 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.169685 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:40Z","lastTransitionTime":"2025-11-24T16:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.272499 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.272534 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.272542 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.272557 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.272568 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:40Z","lastTransitionTime":"2025-11-24T16:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.376241 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.376300 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.376312 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.376330 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.376343 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:40Z","lastTransitionTime":"2025-11-24T16:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.479438 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.479504 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.479522 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.479547 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.479568 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:40Z","lastTransitionTime":"2025-11-24T16:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.580162 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.580280 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:40 crc kubenswrapper[4768]: E1124 16:52:40.580496 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.580534 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:40 crc kubenswrapper[4768]: E1124 16:52:40.581208 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:52:40 crc kubenswrapper[4768]: E1124 16:52:40.581290 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.581759 4768 scope.go:117] "RemoveContainer" containerID="906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.582590 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.582632 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.582648 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.583313 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.583426 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:40Z","lastTransitionTime":"2025-11-24T16:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.686397 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.686461 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.686478 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.686504 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.686522 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:40Z","lastTransitionTime":"2025-11-24T16:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.789479 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.789520 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.789531 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.789547 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.789559 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:40Z","lastTransitionTime":"2025-11-24T16:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.892409 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.892453 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.892468 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.892490 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.892506 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:40Z","lastTransitionTime":"2025-11-24T16:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.920273 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.922931 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cda3d09a7bd3d717900b61eb723790a947fa41f5ba8158f846b732032850de11"} Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.923462 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.945471 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:40Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.959780 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:40Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.976844 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:40Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.994986 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.995188 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.995300 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.995429 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.995527 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:40Z","lastTransitionTime":"2025-11-24T16:52:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:40 crc kubenswrapper[4768]: I1124 16:52:40.997696 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:40Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.013709 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c17685b6f60ddfe6df8a9e2101c8261b61a004fe711173e41d189807791444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6511b48fa7f312490f11e2e4851845631f91f5e9b75073f49f938f237c9b0aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wdz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:41Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.029096 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:41Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.042653 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:41Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.064761 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5905bbb5c45f6a117254020e1115bdf5a084b35b763c89e24c2124f32155bf41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:32Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1124 16:52:32.238121 6088 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI1124 16:52:32.238155 6088 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1124 16:52:32.238181 6088 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1124 16:52:32.238237 6088 factory.go:1336] Added *v1.Node event handler 7\\\\nI1124 16:52:32.238274 6088 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1124 16:52:32.238616 6088 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1124 16:52:32.238704 6088 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1124 16:52:32.238743 6088 ovnkube.go:599] Stopped ovnkube\\\\nI1124 16:52:32.238764 6088 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 16:52:32.238878 6088 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:34Z\\\",\\\"message\\\":\\\"8338 6223 default_network_controller.go:776] Recording success event on pod 
openshift-etcd/etcd-crc\\\\nI1124 16:52:33.918391 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918404 6223 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918409 6223 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 16:52:33.918414 6223 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1124 16:52:33.918428 6223 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1124 16:52:33.918437 6223 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918445 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1124 16:52:33.918463 6223 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:41Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.079902 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:41Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.093528 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:41Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.098226 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.098440 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.098570 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.098715 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.098831 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:41Z","lastTransitionTime":"2025-11-24T16:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.109923 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:41Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.122203 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:41Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.131667 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:41Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.145037 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda3d09a7bd3d717900b61eb723790a947fa41f5ba8158f846b732032850de11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:41Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.155781 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:41Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.168282 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:41Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.179444 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-275xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff18637c-91e0-4ea4-9f9a-53c5b0277927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-275xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:41Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.201707 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.201968 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.202053 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.202132 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.202218 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:41Z","lastTransitionTime":"2025-11-24T16:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.305087 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.305392 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.305550 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.305673 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.305791 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:41Z","lastTransitionTime":"2025-11-24T16:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.408601 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.408977 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.409322 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.409559 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.409684 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:41Z","lastTransitionTime":"2025-11-24T16:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.512725 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.513079 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.513343 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.513560 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.513749 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:41Z","lastTransitionTime":"2025-11-24T16:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.580265 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:52:41 crc kubenswrapper[4768]: E1124 16:52:41.580720 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.616250 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.616301 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.616318 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.616342 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.616391 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:41Z","lastTransitionTime":"2025-11-24T16:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.719254 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.719300 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.719309 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.719322 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.719333 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:41Z","lastTransitionTime":"2025-11-24T16:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.821952 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.822007 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.822019 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.822039 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.822051 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:41Z","lastTransitionTime":"2025-11-24T16:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.924511 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.924550 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.924557 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.924572 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:41 crc kubenswrapper[4768]: I1124 16:52:41.924581 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:41Z","lastTransitionTime":"2025-11-24T16:52:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.027713 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.027768 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.027785 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.027808 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.027828 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:42Z","lastTransitionTime":"2025-11-24T16:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.134603 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.134658 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.134671 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.134689 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.134708 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:42Z","lastTransitionTime":"2025-11-24T16:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.240317 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.240382 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.240393 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.240411 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.240424 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:42Z","lastTransitionTime":"2025-11-24T16:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.344083 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.344140 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.344157 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.344182 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.344200 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:42Z","lastTransitionTime":"2025-11-24T16:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.447489 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.447542 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.447560 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.447585 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.447603 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:42Z","lastTransitionTime":"2025-11-24T16:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.551167 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.551263 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.551279 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.551308 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.551324 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:42Z","lastTransitionTime":"2025-11-24T16:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.580905 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.581012 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.580908 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:42 crc kubenswrapper[4768]: E1124 16:52:42.581157 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:52:42 crc kubenswrapper[4768]: E1124 16:52:42.581246 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:52:42 crc kubenswrapper[4768]: E1124 16:52:42.581410 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.654669 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.654725 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.654743 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.654768 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.654786 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:42Z","lastTransitionTime":"2025-11-24T16:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.758695 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.758791 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.758817 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.758855 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.758881 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:42Z","lastTransitionTime":"2025-11-24T16:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.862160 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.862225 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.862247 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.862287 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.862316 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:42Z","lastTransitionTime":"2025-11-24T16:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.965212 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.965266 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.965284 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.965308 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:42 crc kubenswrapper[4768]: I1124 16:52:42.965325 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:42Z","lastTransitionTime":"2025-11-24T16:52:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.068473 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.068535 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.068559 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.068588 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.068606 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:43Z","lastTransitionTime":"2025-11-24T16:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.172321 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.172455 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.172476 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.172508 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.172529 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:43Z","lastTransitionTime":"2025-11-24T16:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.275824 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.275893 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.275910 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.275941 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.275958 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:43Z","lastTransitionTime":"2025-11-24T16:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.379011 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.379075 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.379092 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.379117 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.379134 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:43Z","lastTransitionTime":"2025-11-24T16:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.482267 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.482321 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.482330 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.482369 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.482379 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:43Z","lastTransitionTime":"2025-11-24T16:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.580780 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:52:43 crc kubenswrapper[4768]: E1124 16:52:43.580952 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.585051 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.585126 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.585141 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.585185 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.585199 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:43Z","lastTransitionTime":"2025-11-24T16:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.604052 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.604905 4768 scope.go:117] "RemoveContainer" containerID="9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774" Nov 24 16:52:43 crc kubenswrapper[4768]: E1124 16:52:43.605078 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-98lk9_openshift-ovn-kubernetes(17a83d5e-e5e7-422d-ab0e-647ca2eefb37)\"" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.620920 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:43Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.633317 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:43Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.644767 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:43Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.661914 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:43Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.681231 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:43Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.687547 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.687596 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.687609 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.687631 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.687649 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:43Z","lastTransitionTime":"2025-11-24T16:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.702641 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:43Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.725133 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:43Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.739766 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-275xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff18637c-91e0-4ea4-9f9a-53c5b0277927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-275xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:43Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.762530 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda3d09a7bd3d717900b61eb723790a947fa41f5ba8158f846b732032850de11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:43Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.782443 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:43Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.790794 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.790861 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.790879 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.790905 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.790925 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:43Z","lastTransitionTime":"2025-11-24T16:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.797716 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:43Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.813180 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c17685b6f60ddfe6df8a9e2101c8261b61a004fe711173e41d189807791444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6511b48fa7f312490f11e2e4851845631f91f5e9b75073f49f938f237c9b0aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wdz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:43Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.845807 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"
name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:43Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.871180 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:43Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.893837 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.893883 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.893897 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.893916 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.893931 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:43Z","lastTransitionTime":"2025-11-24T16:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.900320 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:34Z\\\",\\\"message\\\":\\\"8338 6223 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1124 16:52:33.918391 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918404 6223 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918409 6223 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 16:52:33.918414 6223 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1124 16:52:33.918428 6223 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1124 16:52:33.918437 6223 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918445 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1124 16:52:33.918463 6223 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-98lk9_openshift-ovn-kubernetes(17a83d5e-e5e7-422d-ab0e-647ca2eefb37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:43Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.919897 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:43Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.933485 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:43Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.997162 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.997302 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.997327 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.997394 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:43 crc kubenswrapper[4768]: I1124 16:52:43.997421 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:43Z","lastTransitionTime":"2025-11-24T16:52:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.020167 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs\") pod \"network-metrics-daemon-275xl\" (UID: \"ff18637c-91e0-4ea4-9f9a-53c5b0277927\") " pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:52:44 crc kubenswrapper[4768]: E1124 16:52:44.020455 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 16:52:44 crc kubenswrapper[4768]: E1124 16:52:44.020582 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs podName:ff18637c-91e0-4ea4-9f9a-53c5b0277927 nodeName:}" failed. No retries permitted until 2025-11-24 16:52:52.020552916 +0000 UTC m=+53.267521784 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs") pod "network-metrics-daemon-275xl" (UID: "ff18637c-91e0-4ea4-9f9a-53c5b0277927") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.100775 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.100845 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.100858 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.100874 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.100887 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:44Z","lastTransitionTime":"2025-11-24T16:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.203718 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.203766 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.203775 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.203792 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.203805 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:44Z","lastTransitionTime":"2025-11-24T16:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.306483 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.306536 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.306547 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.306566 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.306579 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:44Z","lastTransitionTime":"2025-11-24T16:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.408296 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.408338 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.408365 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.408382 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.408393 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:44Z","lastTransitionTime":"2025-11-24T16:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.511129 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.511158 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.511165 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.511176 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.511184 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:44Z","lastTransitionTime":"2025-11-24T16:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.580545 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.580576 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:44 crc kubenswrapper[4768]: E1124 16:52:44.580645 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:52:44 crc kubenswrapper[4768]: E1124 16:52:44.580741 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.580588 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:44 crc kubenswrapper[4768]: E1124 16:52:44.580894 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.612852 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.612917 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.612939 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.612967 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.612991 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:44Z","lastTransitionTime":"2025-11-24T16:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.715724 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.715779 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.715797 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.715822 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.715839 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:44Z","lastTransitionTime":"2025-11-24T16:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.818522 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.818566 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.818577 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.818618 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.818631 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:44Z","lastTransitionTime":"2025-11-24T16:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.921118 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.921185 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.921204 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.921235 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:44 crc kubenswrapper[4768]: I1124 16:52:44.921260 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:44Z","lastTransitionTime":"2025-11-24T16:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.024405 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.024443 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.024453 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.024468 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.024480 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:45Z","lastTransitionTime":"2025-11-24T16:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.127320 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.127418 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.127435 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.127461 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.127485 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:45Z","lastTransitionTime":"2025-11-24T16:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.230543 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.230617 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.230635 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.230666 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.230704 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:45Z","lastTransitionTime":"2025-11-24T16:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.334591 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.334650 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.334667 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.334689 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.334702 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:45Z","lastTransitionTime":"2025-11-24T16:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.438288 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.438343 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.438381 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.438401 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.438417 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:45Z","lastTransitionTime":"2025-11-24T16:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.541849 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.541935 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.541947 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.541981 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.541990 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:45Z","lastTransitionTime":"2025-11-24T16:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.580854 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:52:45 crc kubenswrapper[4768]: E1124 16:52:45.581020 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.645575 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.645640 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.645666 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.645691 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.645713 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:45Z","lastTransitionTime":"2025-11-24T16:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.748646 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.748710 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.748729 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.748754 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.748773 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:45Z","lastTransitionTime":"2025-11-24T16:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.851013 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.851129 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.851143 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.851181 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.851199 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:45Z","lastTransitionTime":"2025-11-24T16:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.954492 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.954557 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.954575 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.954602 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:45 crc kubenswrapper[4768]: I1124 16:52:45.954621 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:45Z","lastTransitionTime":"2025-11-24T16:52:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.057588 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.057680 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.057695 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.057721 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.057738 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:46Z","lastTransitionTime":"2025-11-24T16:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.160285 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.160320 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.160328 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.160343 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.160365 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:46Z","lastTransitionTime":"2025-11-24T16:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.263623 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.263663 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.263677 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.263698 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.263712 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:46Z","lastTransitionTime":"2025-11-24T16:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.366325 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.366430 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.366450 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.366476 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.366495 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:46Z","lastTransitionTime":"2025-11-24T16:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.469052 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.469121 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.469141 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.469169 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.469185 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:46Z","lastTransitionTime":"2025-11-24T16:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.572103 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.572171 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.572190 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.572215 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.572231 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:46Z","lastTransitionTime":"2025-11-24T16:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.580677 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.580683 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.580684 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:46 crc kubenswrapper[4768]: E1124 16:52:46.581072 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:52:46 crc kubenswrapper[4768]: E1124 16:52:46.581189 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:52:46 crc kubenswrapper[4768]: E1124 16:52:46.580872 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.675030 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.675080 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.675091 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.675108 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.675123 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:46Z","lastTransitionTime":"2025-11-24T16:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.778301 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.778401 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.778420 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.778445 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.778462 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:46Z","lastTransitionTime":"2025-11-24T16:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.881267 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.881311 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.881320 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.881336 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.881361 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:46Z","lastTransitionTime":"2025-11-24T16:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.899235 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.899293 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.899310 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.899334 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.899374 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:46Z","lastTransitionTime":"2025-11-24T16:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:46 crc kubenswrapper[4768]: E1124 16:52:46.918952 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:46Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.924062 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.924123 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.924143 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.924171 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.924189 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:46Z","lastTransitionTime":"2025-11-24T16:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:46 crc kubenswrapper[4768]: E1124 16:52:46.944772 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:46Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.948965 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.949006 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.949020 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.949043 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.949058 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:46Z","lastTransitionTime":"2025-11-24T16:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.973663 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.973702 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.973719 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.973738 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.973750 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:46Z","lastTransitionTime":"2025-11-24T16:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.996105 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.996174 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.996194 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.996222 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:46 crc kubenswrapper[4768]: I1124 16:52:46.996242 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:46Z","lastTransitionTime":"2025-11-24T16:52:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:47 crc kubenswrapper[4768]: E1124 16:52:47.012615 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:47Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:47 crc kubenswrapper[4768]: E1124 16:52:47.012848 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.015039 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.015092 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.015109 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.015130 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.015152 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:47Z","lastTransitionTime":"2025-11-24T16:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.118211 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.118260 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.118274 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.118291 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.118307 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:47Z","lastTransitionTime":"2025-11-24T16:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.221904 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.221944 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.221967 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.221987 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.222004 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:47Z","lastTransitionTime":"2025-11-24T16:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.324480 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.324523 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.324532 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.324547 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.324557 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:47Z","lastTransitionTime":"2025-11-24T16:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.427470 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.427513 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.427524 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.427539 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.427549 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:47Z","lastTransitionTime":"2025-11-24T16:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.530067 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.530116 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.530129 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.530145 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.530156 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:47Z","lastTransitionTime":"2025-11-24T16:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.580277 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:52:47 crc kubenswrapper[4768]: E1124 16:52:47.580479 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.638234 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.638298 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.638318 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.638373 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.638404 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:47Z","lastTransitionTime":"2025-11-24T16:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.742166 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.742237 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.742255 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.742283 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.742302 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:47Z","lastTransitionTime":"2025-11-24T16:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.845695 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.845752 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.845764 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.845782 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.845793 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:47Z","lastTransitionTime":"2025-11-24T16:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.947623 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.948186 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.948343 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.948528 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:47 crc kubenswrapper[4768]: I1124 16:52:47.948665 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:47Z","lastTransitionTime":"2025-11-24T16:52:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.052110 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.052175 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.052190 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.052211 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.052224 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:48Z","lastTransitionTime":"2025-11-24T16:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.154920 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.154984 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.155006 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.155038 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.155061 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:48Z","lastTransitionTime":"2025-11-24T16:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.257557 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.257629 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.257647 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.257674 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.257694 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:48Z","lastTransitionTime":"2025-11-24T16:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.360907 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.360973 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.360991 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.361015 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.361032 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:48Z","lastTransitionTime":"2025-11-24T16:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.467617 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.467672 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.467686 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.467704 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.467715 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:48Z","lastTransitionTime":"2025-11-24T16:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.570427 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.570475 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.570492 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.570516 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.570533 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:48Z","lastTransitionTime":"2025-11-24T16:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.580316 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.580336 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 16:52:48 crc kubenswrapper[4768]: E1124 16:52:48.580460 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.580481 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 16:52:48 crc kubenswrapper[4768]: E1124 16:52:48.580613 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 16:52:48 crc kubenswrapper[4768]: E1124 16:52:48.580728 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.673301 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.673340 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.673366 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.673383 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.673396 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:48Z","lastTransitionTime":"2025-11-24T16:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.776240 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.776299 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.776315 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.776339 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.776384 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:48Z","lastTransitionTime":"2025-11-24T16:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.879112 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.879177 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.879198 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.879226 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.879245 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:48Z","lastTransitionTime":"2025-11-24T16:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.982006 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.982043 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.982054 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.982071 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:52:48 crc kubenswrapper[4768]: I1124 16:52:48.982082 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:48Z","lastTransitionTime":"2025-11-24T16:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.085426 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.085464 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.085474 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.085491 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.085549 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:49Z","lastTransitionTime":"2025-11-24T16:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.188120 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.188182 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.188203 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.188233 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.188253 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:49Z","lastTransitionTime":"2025-11-24T16:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.290751 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.290802 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.290814 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.290830 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.290856 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:49Z","lastTransitionTime":"2025-11-24T16:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.393509 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.393545 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.393556 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.393575 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.393587 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:49Z","lastTransitionTime":"2025-11-24T16:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.496613 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.496647 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.496655 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.496670 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.496679 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:49Z","lastTransitionTime":"2025-11-24T16:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.580198 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl"
Nov 24 16:52:49 crc kubenswrapper[4768]: E1124 16:52:49.580618 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.599379 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.599427 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.599441 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.599459 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.599471 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:49Z","lastTransitionTime":"2025-11-24T16:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.599899 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:49Z is after 2025-08-24T17:21:41Z"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.618981 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:49Z is after 2025-08-24T17:21:41Z"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.632389 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c17685b6f60ddfe6df8a9e2101c8261b61a004fe711173e41d189807791444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6511b48fa7f312490f11e2e4851845631f91f5e9b75073f49f938f237c9b0aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wdz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:49Z is after 2025-08-24T17:21:41Z"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.662787 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:49Z is after 2025-08-24T17:21:41Z"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.688564 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:49Z is after 2025-08-24T17:21:41Z"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.702597 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.702650 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.702669 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.702696 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.702717 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:49Z","lastTransitionTime":"2025-11-24T16:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.724007 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:34Z\\\",\\\"message\\\":\\\"8338 6223 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1124 16:52:33.918391 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918404 6223 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918409 6223 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 16:52:33.918414 6223 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1124 16:52:33.918428 6223 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1124 16:52:33.918437 6223 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918445 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1124 16:52:33.918463 6223 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-98lk9_openshift-ovn-kubernetes(17a83d5e-e5e7-422d-ab0e-647ca2eefb37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:49Z is after 2025-08-24T17:21:41Z"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.744464 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:49Z is after 2025-08-24T17:21:41Z"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.763143 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:49Z is after 2025-08-24T17:21:41Z"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.781866 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:49Z is after 2025-08-24T17:21:41Z"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.797038 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:49Z is after 2025-08-24T17:21:41Z"
Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.805774 4768 kubelet_node_status.go:724]
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.805828 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.805846 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.805872 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.805892 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:49Z","lastTransitionTime":"2025-11-24T16:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.812451 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:49Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.830239 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:49Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.848762 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:49Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.865622 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:49Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.881377 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:49Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.894897 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-275xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff18637c-91e0-4ea4-9f9a-53c5b0277927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-275xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:49Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.908849 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.908912 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.908932 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.908964 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.908985 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:49Z","lastTransitionTime":"2025-11-24T16:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:49 crc kubenswrapper[4768]: I1124 16:52:49.911700 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda3d09a7bd3d717900b61eb723790a947fa41f5ba8158f846b732032850de11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:49Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.011527 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.011596 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.011613 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.011639 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.011660 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:50Z","lastTransitionTime":"2025-11-24T16:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.115556 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.115618 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.115640 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.115667 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.115688 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:50Z","lastTransitionTime":"2025-11-24T16:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.218592 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.218700 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.218723 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.218768 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.218789 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:50Z","lastTransitionTime":"2025-11-24T16:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.322126 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.322177 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.322189 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.322214 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.322227 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:50Z","lastTransitionTime":"2025-11-24T16:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.430714 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.430767 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.430783 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.430806 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.430823 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:50Z","lastTransitionTime":"2025-11-24T16:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.533182 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.533244 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.533264 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.533289 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.533307 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:50Z","lastTransitionTime":"2025-11-24T16:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.580049 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.580049 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:50 crc kubenswrapper[4768]: E1124 16:52:50.580243 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:52:50 crc kubenswrapper[4768]: E1124 16:52:50.580299 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.580073 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:50 crc kubenswrapper[4768]: E1124 16:52:50.580501 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.635504 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.635565 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.635584 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.635609 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.635628 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:50Z","lastTransitionTime":"2025-11-24T16:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.738754 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.738828 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.738850 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.738877 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.738900 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:50Z","lastTransitionTime":"2025-11-24T16:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.846231 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.846512 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.847140 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.847167 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.847186 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:50Z","lastTransitionTime":"2025-11-24T16:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.950401 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.950607 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.950640 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.950670 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:50 crc kubenswrapper[4768]: I1124 16:52:50.950691 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:50Z","lastTransitionTime":"2025-11-24T16:52:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.053972 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.054026 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.054043 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.054068 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.054086 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:51Z","lastTransitionTime":"2025-11-24T16:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.156864 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.156901 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.156911 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.156925 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.156935 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:51Z","lastTransitionTime":"2025-11-24T16:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.260158 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.260226 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.260252 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.260285 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.260307 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:51Z","lastTransitionTime":"2025-11-24T16:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.363682 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.363745 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.363764 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.363791 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.363812 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:51Z","lastTransitionTime":"2025-11-24T16:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.466962 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.467030 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.467050 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.467078 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.467100 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:51Z","lastTransitionTime":"2025-11-24T16:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.570806 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.570869 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.570888 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.570914 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.570931 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:51Z","lastTransitionTime":"2025-11-24T16:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.580272 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:52:51 crc kubenswrapper[4768]: E1124 16:52:51.580501 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.673878 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.673989 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.674012 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.674041 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.674062 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:51Z","lastTransitionTime":"2025-11-24T16:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.778187 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.778268 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.778294 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.778327 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.778386 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:51Z","lastTransitionTime":"2025-11-24T16:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.880927 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.880984 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.881000 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.881025 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.881042 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:51Z","lastTransitionTime":"2025-11-24T16:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.982934 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.982992 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.983009 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.983037 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:51 crc kubenswrapper[4768]: I1124 16:52:51.983054 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:51Z","lastTransitionTime":"2025-11-24T16:52:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.086292 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.086689 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.086842 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.086983 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.087150 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:52Z","lastTransitionTime":"2025-11-24T16:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.099341 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs\") pod \"network-metrics-daemon-275xl\" (UID: \"ff18637c-91e0-4ea4-9f9a-53c5b0277927\") " pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:52:52 crc kubenswrapper[4768]: E1124 16:52:52.099593 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 16:52:52 crc kubenswrapper[4768]: E1124 16:52:52.099723 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs podName:ff18637c-91e0-4ea4-9f9a-53c5b0277927 nodeName:}" failed. No retries permitted until 2025-11-24 16:53:08.099694283 +0000 UTC m=+69.346662991 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs") pod "network-metrics-daemon-275xl" (UID: "ff18637c-91e0-4ea4-9f9a-53c5b0277927") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.189870 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.189927 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.189945 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.189970 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.189990 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:52Z","lastTransitionTime":"2025-11-24T16:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
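
The durationBeforeRetry 16s above, and the 32s values that follow, are the volume manager's exponential backoff at work: each time a MountVolume or UnmountVolume operation fails, the wait before the next attempt roughly doubles up to a cap, which is why the retry horizon for metrics-certs lands at 16:53:08 while the later failures are pushed out to 16:53:24. A minimal sketch of that doubling, with illustrative constants rather than kubelet's exact ones:

    // backoff.go - a sketch of the doubling retry delay behind the
    // "durationBeforeRetry 16s" / "32s" lines. Illustrative constants,
    // not the exact kubelet implementation.
    package main

    import (
        "fmt"
        "time"
    )

    const (
        initialDelay = 500 * time.Millisecond // first retry comes quickly
        maxDelay     = 2 * time.Minute        // delays stop growing here
    )

    // nextDelay doubles the previous delay, clamped to maxDelay.
    func nextDelay(prev time.Duration) time.Duration {
        if prev <= 0 {
            return initialDelay
        }
        d := prev * 2
        if d > maxDelay {
            return maxDelay
        }
        return d
    }

    func main() {
        var d time.Duration
        for i := 0; i < 10; i++ {
            d = nextDelay(d)
            fmt.Printf("retry %d after %v\n", i+1, d)
        }
    }

On this progression the delay passes through 16s and then 32s after a handful of consecutive failures, matching the two values visible in the log; the same backoff governs the CSI TearDown retry just below.
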
Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.291964 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.292027 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.292043 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.292069 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.292087 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:52Z","lastTransitionTime":"2025-11-24T16:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.301491 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:52:52 crc kubenswrapper[4768]: E1124 16:52:52.301630 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:53:24.301605084 +0000 UTC m=+85.548573772 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.301976 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:52 crc kubenswrapper[4768]: E1124 16:52:52.302090 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.302100 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:52 crc kubenswrapper[4768]: E1124 16:52:52.302183 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 16:53:24.3021551 +0000 UTC m=+85.549123788 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 16:52:52 crc kubenswrapper[4768]: E1124 16:52:52.302325 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 16:52:52 crc kubenswrapper[4768]: E1124 16:52:52.302452 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 16:53:24.302420778 +0000 UTC m=+85.549389466 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.395202 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.395254 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.395271 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.395294 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.395316 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:52Z","lastTransitionTime":"2025-11-24T16:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.403040 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.403084 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:52 crc kubenswrapper[4768]: E1124 16:52:52.403203 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 16:52:52 crc kubenswrapper[4768]: E1124 16:52:52.403235 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 16:52:52 crc kubenswrapper[4768]: E1124 16:52:52.403251 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:52 crc kubenswrapper[4768]: E1124 16:52:52.403254 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 16:52:52 crc kubenswrapper[4768]: E1124 
16:52:52.403291 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 16:52:52 crc kubenswrapper[4768]: E1124 16:52:52.403310 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:52 crc kubenswrapper[4768]: E1124 16:52:52.403321 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 16:53:24.403301487 +0000 UTC m=+85.650270155 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:52 crc kubenswrapper[4768]: E1124 16:52:52.403406 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 16:53:24.403383949 +0000 UTC m=+85.650352647 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.498209 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.498250 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.498267 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.498294 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.498312 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:52Z","lastTransitionTime":"2025-11-24T16:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.580463 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
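
The kube-api-access-* mounts above fail for the same reason as the metrics-certs secret earlier: a projected service-account volume is assembled from several sources, the token plus the kube-root-ca.crt and openshift-service-ca.crt ConfigMaps, and the mount can only proceed once every source is registered in the kubelet's view of the cluster, which is still syncing at this point. The all-or-nothing flavor of that assembly is the important part; a hypothetical sketch (the helper and the cache map are illustrative, not kubelet's actual code):

    // projected.go - a sketch of why the kube-api-access-* mounts above
    // fail: every source of the projected volume must resolve before the
    // volume can be set up. Hypothetical helper, not kubelet's code.
    package main

    import (
        "errors"
        "fmt"
    )

    // registered stands in for kubelet's per-namespace object cache;
    // while the node is still syncing, it is empty.
    var registered = map[string]bool{}

    // buildKubeAPIAccess gathers the ConfigMap sources of a
    // kube-api-access volume. If any source is missing, the whole
    // mount fails, as in the log above.
    func buildKubeAPIAccess(ns string) error {
        var errs []error
        for _, cm := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
            key := ns + "/" + cm
            if !registered[key] {
                errs = append(errs, fmt.Errorf("object %q not registered", key))
            }
        }
        return errors.Join(errs...) // nil only when every source resolved
    }

    func main() {
        if err := buildKubeAPIAccess("openshift-network-diagnostics"); err != nil {
            fmt.Println("MountVolume.SetUp failed:", err)
        }
    }
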
Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.580543 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.580550 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:52 crc kubenswrapper[4768]: E1124 16:52:52.580602 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:52:52 crc kubenswrapper[4768]: E1124 16:52:52.580656 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:52:52 crc kubenswrapper[4768]: E1124 16:52:52.580853 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.601488 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.601536 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.601547 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.601564 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.601576 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:52Z","lastTransitionTime":"2025-11-24T16:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.703843 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.703890 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.703906 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.703928 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.703946 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:52Z","lastTransitionTime":"2025-11-24T16:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.806531 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.806583 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.806599 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.806625 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.806642 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:52Z","lastTransitionTime":"2025-11-24T16:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.909265 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.909320 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.909336 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.909402 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:52 crc kubenswrapper[4768]: I1124 16:52:52.909423 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:52Z","lastTransitionTime":"2025-11-24T16:52:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.012538 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.012808 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.012826 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.012855 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.012874 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:53Z","lastTransitionTime":"2025-11-24T16:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.116537 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.116587 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.116603 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.116627 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.116644 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:53Z","lastTransitionTime":"2025-11-24T16:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.219781 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.219849 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.219867 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.219894 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.219912 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:53Z","lastTransitionTime":"2025-11-24T16:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.323926 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.323998 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.324013 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.324037 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.324052 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:53Z","lastTransitionTime":"2025-11-24T16:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.427459 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.427518 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.427534 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.427561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.427578 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:53Z","lastTransitionTime":"2025-11-24T16:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.529811 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.529882 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.529900 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.529926 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.529944 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:53Z","lastTransitionTime":"2025-11-24T16:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.580593 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:52:53 crc kubenswrapper[4768]: E1124 16:52:53.580838 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.632233 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.632794 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.632815 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.632844 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.632861 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:53Z","lastTransitionTime":"2025-11-24T16:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.736564 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.736621 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.736635 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.736663 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.736680 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:53Z","lastTransitionTime":"2025-11-24T16:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.840763 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.840869 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.840927 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.840953 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.840976 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:53Z","lastTransitionTime":"2025-11-24T16:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.944549 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.944636 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.944658 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.944697 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:53 crc kubenswrapper[4768]: I1124 16:52:53.944719 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:53Z","lastTransitionTime":"2025-11-24T16:52:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.035676 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.048766 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.048822 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.048836 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.048855 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.048869 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:54Z","lastTransitionTime":"2025-11-24T16:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.052557 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.064140 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:54Z is after 2025-08-24T17:21:41Z"
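
A second, distinct failure mode starts here: pod status patches now reach the API machinery but are rejected because the serving certificate of the pod.network-node-identity.openshift.io webhook expired on 2025-08-24, while the node's clock reads 2025-11-24. Every "Failed to update status for pod" entry that follows carries the same x509 error, and they will keep failing until that certificate is rotated (or the clock, if skewed, is corrected). The rejection is the ordinary validity-window check that every TLS client performs during the handshake; sketched below against a local file (the path is illustrative):

    // certwindow.go - a sketch of the validity check behind
    // "x509: certificate has expired or is not yet valid"; the TLS
    // handshake performs the same NotBefore/NotAfter comparison.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Path is illustrative; point it at the webhook's serving cert.
        data, err := os.ReadFile("tls.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        now := time.Now()
        switch {
        case now.Before(cert.NotBefore):
            fmt.Printf("not yet valid: current time %s is before %s\n",
                now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
        case now.After(cert.NotAfter):
            // The branch this log is hitting.
            fmt.Printf("expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        default:
            fmt.Println("certificate is within its validity window")
        }
    }
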
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:54Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.078705 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:54Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.093222 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:54Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.114128 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:54Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.138166 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:54Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.152088 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.152151 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.152163 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.152180 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.152190 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:54Z","lastTransitionTime":"2025-11-24T16:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.157531 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:54Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.178041 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:54Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.197666 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-275xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff18637c-91e0-4ea4-9f9a-53c5b0277927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-275xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:54Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.214870 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda3d09a7bd3d717900b61eb723790a947fa41f5ba8158f846b732032850de11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:54Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.230502 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:54Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.250736 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:54Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.254881 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.254945 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.254959 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.254983 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.254995 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:54Z","lastTransitionTime":"2025-11-24T16:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.268265 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c17685b6f60ddfe6df8a9e2101c8261b61a004fe711173e41d189807791444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6511b48fa7f312490f11e2e4851845631f91f5e9b75073f49f938f237c9b0aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wdz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:54Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.293933 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc
639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:54Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.313389 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:54Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.340924 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"
containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:34Z\\\",\\\"message\\\":\\\"8338 6223 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1124 16:52:33.918391 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918404 6223 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918409 6223 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 16:52:33.918414 6223 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1124 16:52:33.918428 6223 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1124 16:52:33.918437 6223 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918445 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1124 16:52:33.918463 6223 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-98lk9_openshift-ovn-kubernetes(17a83d5e-e5e7-422d-ab0e-647ca2eefb37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:54Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.361655 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.361825 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.361838 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.361858 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.361873 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:54Z","lastTransitionTime":"2025-11-24T16:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.364734 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:54Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.380849 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:54Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.464884 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.464926 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.464936 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.464953 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.464965 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:54Z","lastTransitionTime":"2025-11-24T16:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.567554 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.567598 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.567614 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.567634 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.567650 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:54Z","lastTransitionTime":"2025-11-24T16:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.580084 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.580178 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.580250 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.580947 4768 scope.go:117] "RemoveContainer" containerID="9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774" Nov 24 16:52:54 crc kubenswrapper[4768]: E1124 16:52:54.581592 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:52:54 crc kubenswrapper[4768]: E1124 16:52:54.581697 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:52:54 crc kubenswrapper[4768]: E1124 16:52:54.582005 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.670698 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.670761 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.670780 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.670805 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.670822 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:54Z","lastTransitionTime":"2025-11-24T16:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.774453 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.774500 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.774516 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.774541 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.774559 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:54Z","lastTransitionTime":"2025-11-24T16:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.881322 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.881387 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.881399 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.881425 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.881441 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:54Z","lastTransitionTime":"2025-11-24T16:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.974710 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-98lk9_17a83d5e-e5e7-422d-ab0e-647ca2eefb37/ovnkube-controller/1.log" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.982525 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerStarted","Data":"23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d"} Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.983552 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.985118 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.985176 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.985195 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.985223 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:54 crc kubenswrapper[4768]: I1124 16:52:54.985245 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:54Z","lastTransitionTime":"2025-11-24T16:52:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.004712 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:55Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.029927 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:34Z\\\",\\\"message\\\":\\\"8338 6223 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1124 16:52:33.918391 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918404 6223 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918409 6223 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 16:52:33.918414 6223 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1124 16:52:33.918428 6223 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1124 16:52:33.918437 6223 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918445 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1124 16:52:33.918463 6223 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[
{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:55Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.052719 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:55Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.077384 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:55Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.088691 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.088841 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.088947 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.089064 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.089168 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:55Z","lastTransitionTime":"2025-11-24T16:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.095934 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:55Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.126310 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:55Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.141526 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:55Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.158230 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:55Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.186035 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda3d09a7bd3d717900b61eb723790a947fa41f5ba8158f846b732032850de11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:55Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.191476 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.191527 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.191539 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.191561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.191576 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:55Z","lastTransitionTime":"2025-11-24T16:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.202229 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d23d704e-96c9-4e48-8f80-0761fb1d07e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cefe8fbd1321d8e391d341491eff1a583f56e4ef09d1ba71da4d8c84a826185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b0efb1b7f2aad144c24537d9304024680adc1946d26a91c03dcf4c59ac4dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e98467b337c1b1625211569f5df1ad40d100d3243c5358dc61c73327cf0af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:55Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.217409 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:55Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.231948 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:55Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.245296 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-275xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff18637c-91e0-4ea4-9f9a-53c5b0277927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-275xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:55Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.261452 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:55Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.275648 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:55Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.291487 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:55Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.294033 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.294076 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.294090 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.294115 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.294131 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:55Z","lastTransitionTime":"2025-11-24T16:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.304982 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c17685b6f60ddfe6df8a9e2101c8261b61a004fe711173e41d189807791444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6511b48fa7f312490f11e2e4851845631f91f5e9b75073f49f938f237c9b0aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wdz4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:55Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.349604 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:55Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.397053 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.397117 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.397128 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.397151 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.397168 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:55Z","lastTransitionTime":"2025-11-24T16:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.500188 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.500564 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.500627 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.500690 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.500767 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:55Z","lastTransitionTime":"2025-11-24T16:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.580690 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:52:55 crc kubenswrapper[4768]: E1124 16:52:55.580850 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.603886 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.603932 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.603943 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.603977 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.603992 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:55Z","lastTransitionTime":"2025-11-24T16:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.707633 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.707681 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.707695 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.707714 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.707725 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:55Z","lastTransitionTime":"2025-11-24T16:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.810659 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.810713 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.810726 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.810745 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.810760 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:55Z","lastTransitionTime":"2025-11-24T16:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.914250 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.914305 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.914315 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.914333 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.914359 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:55Z","lastTransitionTime":"2025-11-24T16:52:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.989136 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-98lk9_17a83d5e-e5e7-422d-ab0e-647ca2eefb37/ovnkube-controller/2.log" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.989763 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-98lk9_17a83d5e-e5e7-422d-ab0e-647ca2eefb37/ovnkube-controller/1.log" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.993206 4768 generic.go:334] "Generic (PLEG): container finished" podID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerID="23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d" exitCode=1 Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.993264 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerDied","Data":"23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d"} Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.993347 4768 scope.go:117] "RemoveContainer" containerID="9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774" Nov 24 16:52:55 crc kubenswrapper[4768]: I1124 16:52:55.993924 4768 scope.go:117] "RemoveContainer" containerID="23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d" Nov 24 16:52:55 crc kubenswrapper[4768]: E1124 16:52:55.994085 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-98lk9_openshift-ovn-kubernetes(17a83d5e-e5e7-422d-ab0e-647ca2eefb37)\"" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.016242 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.016282 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.016292 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.016307 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.016317 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:56Z","lastTransitionTime":"2025-11-24T16:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.019820 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:56Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.033930 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:56Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.059796 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a9be79a1f32dfbc415c53d89f8657a7928038f7e0d8f51be4632f688c5a7774\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:34Z\\\",\\\"message\\\":\\\"8338 6223 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1124 16:52:33.918391 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918404 6223 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918409 6223 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 16:52:33.918414 6223 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1124 16:52:33.918428 6223 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1124 16:52:33.918437 6223 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 16:52:33.918445 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nF1124 16:52:33.918463 6223 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:55Z\\\",\\\"message\\\":\\\"/factory.go:140\\\\nI1124 16:52:55.648744 6503 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 16:52:55.648836 6503 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 16:52:55.648865 6503 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 16:52:55.648838 6503 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 16:52:55.649226 6503 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 16:52:55.649242 6503 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI1124 16:52:55.649289 6503 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 16:52:55.649305 6503 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 16:52:55.649321 6503 factory.go:656] Stopping watch factory\\\\nI1124 16:52:55.649336 6503 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 16:52:55.649428 6503 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:56Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.077658 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:56Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.097718 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:56Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.115679 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:56Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.118833 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.118882 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.118896 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.118918 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.118934 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:56Z","lastTransitionTime":"2025-11-24T16:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.133449 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:56Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.148087 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:56Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.162098 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-275xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff18637c-91e0-4ea4-9f9a-53c5b0277927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-275xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:56Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.181341 4768 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda3d09a7bd3d717900b61eb723790a947fa41f5ba8158f846b732032850de11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:56Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.194405 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d23d704e-96c9-4e48-8f80-0761fb1d07e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cefe8fbd1321d8e391d341491eff1a583f56e4ef09d1ba71da4d8c84a826185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b0efb1b7f2aad144c24537d9304024680adc1946d26a91c03dcf4c59ac4dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e98467b337c1b1625211569f5df1ad40d100d3243c5358dc61c73327cf0af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:56Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.208161 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:56Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.221480 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.221524 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.221537 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.221582 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.221597 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:56Z","lastTransitionTime":"2025-11-24T16:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.234972 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:56Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.268861 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87e
cd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:56Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.290434 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:56Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.310234 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:56Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.324572 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.324616 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.324625 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.324640 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.324652 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:56Z","lastTransitionTime":"2025-11-24T16:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.331877 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:56Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.348659 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c17685b6f60ddfe6df8a9e2101c8261b61a004fe711173e41d189807791444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6511b48fa7f312490f11e2e4851845631f91f5e9b75073f49f938f237c9b0aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wdz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:56Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.428492 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.428568 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.428588 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.428620 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.428641 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:56Z","lastTransitionTime":"2025-11-24T16:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.532905 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.532988 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.533009 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.533039 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.533062 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:56Z","lastTransitionTime":"2025-11-24T16:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.580445 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.581182 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.581288 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:56 crc kubenswrapper[4768]: E1124 16:52:56.583279 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:52:56 crc kubenswrapper[4768]: E1124 16:52:56.583933 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:52:56 crc kubenswrapper[4768]: E1124 16:52:56.584450 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.636951 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.637011 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.637071 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.637104 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.637127 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:56Z","lastTransitionTime":"2025-11-24T16:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.740460 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.740515 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.740532 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.740558 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.740578 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:56Z","lastTransitionTime":"2025-11-24T16:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.843518 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.843604 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.843623 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.843649 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.843680 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:56Z","lastTransitionTime":"2025-11-24T16:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.947395 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.947473 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.947497 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.947532 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:56 crc kubenswrapper[4768]: I1124 16:52:56.947561 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:56Z","lastTransitionTime":"2025-11-24T16:52:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.001909 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-98lk9_17a83d5e-e5e7-422d-ab0e-647ca2eefb37/ovnkube-controller/2.log" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.008093 4768 scope.go:117] "RemoveContainer" containerID="23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d" Nov 24 16:52:57 crc kubenswrapper[4768]: E1124 16:52:57.008322 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-98lk9_openshift-ovn-kubernetes(17a83d5e-e5e7-422d-ab0e-647ca2eefb37)\"" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.031051 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.050324 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.050394 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.050405 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.050422 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.051110 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:57Z","lastTransitionTime":"2025-11-24T16:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.052977 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.073963 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.088658 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.102979 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.117151 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.118999 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-275xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff18637c-91e0-4ea4-9f9a-53c5b0277927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-275xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.137837 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda3d09a7bd3d717900b61eb723790a947fa41f5ba8158f846b732032850de11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.153672 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.153728 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.153741 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.153764 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.153779 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:57Z","lastTransitionTime":"2025-11-24T16:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.155763 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d23d704e-96c9-4e48-8f80-0761fb1d07e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cefe8fbd1321d8e391d341491eff1a583f56e4ef09d1ba71da4d8c84a826185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b0efb1b7f2aad144c24537d9304024680adc1946d26a91c03dcf4c59ac4dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e98467b337c1b1625211569f5df1ad40d100d3243c5358dc61c73327cf0af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.171221 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.184249 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.201777 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/
log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.215835 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.229593 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.242116 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.242167 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.242177 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.242193 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.242204 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:57Z","lastTransitionTime":"2025-11-24T16:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.245097 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: E1124 16:52:57.254882 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.258109 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c17685b6f60ddfe6df8a9e2101c8261b61a004fe711173e41d189807791444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6511b48fa7f312490f11e2e4851845631f91f5e9b75073f49f938f237c9b0aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wdz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 
16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.258874 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.258918 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.258933 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.258954 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.258970 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:57Z","lastTransitionTime":"2025-11-24T16:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:57 crc kubenswrapper[4768]: E1124 16:52:57.271938 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.273547 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.276084 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.276115 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.276128 4768 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.276146 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.276159 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:57Z","lastTransitionTime":"2025-11-24T16:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.287280 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: E1124 16:52:57.288870 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.293450 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.293481 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.293491 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.293506 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.293517 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:57Z","lastTransitionTime":"2025-11-24T16:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:57 crc kubenswrapper[4768]: E1124 16:52:57.305974 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.309913 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.309941 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.309951 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.309964 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.309973 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:57Z","lastTransitionTime":"2025-11-24T16:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.309902 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e3df12b264d3990ebb888ba0eee27d77e0f09d
d59a38b001f67be0234bdc3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:55Z\\\",\\\"message\\\":\\\"/factory.go:140\\\\nI1124 16:52:55.648744 6503 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 16:52:55.648836 6503 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 16:52:55.648865 6503 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 16:52:55.648838 6503 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 16:52:55.649226 6503 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 16:52:55.649242 6503 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 16:52:55.649289 6503 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 16:52:55.649305 6503 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 16:52:55.649321 6503 factory.go:656] Stopping watch factory\\\\nI1124 16:52:55.649336 6503 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 16:52:55.649428 6503 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-98lk9_openshift-ovn-kubernetes(17a83d5e-e5e7-422d-ab0e-647ca2eefb37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.325756 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: E1124 16:52:57.333166 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: E1124 16:52:57.333335 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.335287 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.335365 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.335378 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.335396 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.335406 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:57Z","lastTransitionTime":"2025-11-24T16:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.339982 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.353946 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.367111 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.379225 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.394608 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.407806 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-275xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff18637c-91e0-4ea4-9f9a-53c5b0277927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-275xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.422107 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda3d09a7bd3d717900b61eb723790a947fa41f5ba8158f846b732032850de11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.438321 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.438384 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.438402 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.438426 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.438441 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:57Z","lastTransitionTime":"2025-11-24T16:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.438717 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d23d704e-96c9-4e48-8f80-0761fb1d07e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cefe8fbd1321d8e391d341491eff1a583f56e4ef09d1ba71da4d8c84a826185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b0efb1b7f2aad144c24537d9304024680adc1946d26a91c03dcf4c59ac4dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e98467b337c1b1625211569f5df1ad40d100d3243c5358dc61c73327cf0af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.452100 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.468950 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c17685b6f60ddfe6df8a9e2101c8261b61a004fe711173e41d189807791444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6511b48fa7f312490f11e2e4851845631f91f5e9b75073f49f938f237c9b0aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wdz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.498172 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.512622 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.527660 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.540477 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.540509 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.540518 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.540533 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.540542 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:57Z","lastTransitionTime":"2025-11-24T16:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.544775 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.562862 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.576341 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.580306 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:52:57 crc kubenswrapper[4768]: E1124 16:52:57.580474 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.607875 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:55Z\\\",\\\"message\\\":\\\"/factory.go:140\\\\nI1124 16:52:55.648744 6503 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 16:52:55.648836 6503 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 16:52:55.648865 6503 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 16:52:55.648838 6503 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 16:52:55.649226 6503 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 16:52:55.649242 6503 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 16:52:55.649289 6503 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 16:52:55.649305 6503 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 16:52:55.649321 6503 factory.go:656] Stopping watch factory\\\\nI1124 16:52:55.649336 6503 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 16:52:55.649428 6503 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed 
container=ovnkube-controller pod=ovnkube-node-98lk9_openshift-ovn-kubernetes(17a83d5e-e5e7-422d-ab0e-647ca2eefb37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:57Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.643008 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.643041 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.643052 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.643068 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.643079 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:57Z","lastTransitionTime":"2025-11-24T16:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.745671 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.745734 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.745759 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.745790 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.745812 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:57Z","lastTransitionTime":"2025-11-24T16:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.848047 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.848134 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.848159 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.848192 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.848215 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:57Z","lastTransitionTime":"2025-11-24T16:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.950840 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.950885 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.950896 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.950913 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:57 crc kubenswrapper[4768]: I1124 16:52:57.950931 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:57Z","lastTransitionTime":"2025-11-24T16:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.054027 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.054109 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.054128 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.054158 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.054176 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:58Z","lastTransitionTime":"2025-11-24T16:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.156126 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.156160 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.156185 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.156200 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.156211 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:58Z","lastTransitionTime":"2025-11-24T16:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.258590 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.258631 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.258639 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.258656 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.258665 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:58Z","lastTransitionTime":"2025-11-24T16:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.361425 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.361466 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.361477 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.361493 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.361504 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:58Z","lastTransitionTime":"2025-11-24T16:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.463392 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.463430 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.463438 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.463451 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.463459 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:58Z","lastTransitionTime":"2025-11-24T16:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.566087 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.566152 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.566170 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.566196 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.566217 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:58Z","lastTransitionTime":"2025-11-24T16:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.580779 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.580833 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:52:58 crc kubenswrapper[4768]: E1124 16:52:58.580891 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.580832 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:52:58 crc kubenswrapper[4768]: E1124 16:52:58.581067 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:52:58 crc kubenswrapper[4768]: E1124 16:52:58.581207 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.669852 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.670090 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.670124 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.670297 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.670326 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:58Z","lastTransitionTime":"2025-11-24T16:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.772608 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.772647 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.772656 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.772670 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.772680 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:58Z","lastTransitionTime":"2025-11-24T16:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.875256 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.875317 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.875331 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.875377 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.875417 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:58Z","lastTransitionTime":"2025-11-24T16:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.979808 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.979950 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.979980 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.980063 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:58 crc kubenswrapper[4768]: I1124 16:52:58.980094 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:58Z","lastTransitionTime":"2025-11-24T16:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.084057 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.084444 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.084611 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.084768 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.084901 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:59Z","lastTransitionTime":"2025-11-24T16:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.188931 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.188966 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.188974 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.188989 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.188998 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:59Z","lastTransitionTime":"2025-11-24T16:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.291196 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.291255 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.291271 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.291295 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.291311 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:59Z","lastTransitionTime":"2025-11-24T16:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.394849 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.394913 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.394931 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.394963 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.394984 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:59Z","lastTransitionTime":"2025-11-24T16:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.498570 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.498627 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.498639 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.498660 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.498674 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:59Z","lastTransitionTime":"2025-11-24T16:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.579767 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:52:59 crc kubenswrapper[4768]: E1124 16:52:59.579934 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.601176 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.601215 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.601226 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.601246 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.601257 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:59Z","lastTransitionTime":"2025-11-24T16:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.601185 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda3d09a7bd3d717900b61eb723790a947fa41f5ba8158f846b732032850de11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:59Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.617737 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d23d704e-96c9-4e48-8f80-0761fb1d07e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cefe8fbd1321d8e391d341491eff1a583f56e4ef09d1ba71da4d8c84a826185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b0efb1b7f2aad144c24537d9304024680adc1946d26a91c03dcf4c59ac4dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e98467b337c1b1625211569f5df1ad40d100d3243c5358dc61c73327cf0af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:59Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.638906 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:59Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.658779 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:59Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.674408 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-275xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff18637c-91e0-4ea4-9f9a-53c5b0277927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-275xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:59Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.704556 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.704615 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.704628 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.704651 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.704665 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:59Z","lastTransitionTime":"2025-11-24T16:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.711416 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:59Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.728892 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:59Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.745787 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:59Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.767199 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:59Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.783122 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c17685b6f60ddfe6df8a9e2101c8261b61a004fe711173e41d189807791444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6511b48fa7f312490f11e2e4851845631f91f5e9b75073f49f938f237c9b0aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wdz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:59Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.799497 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:59Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.807925 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.807998 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.808024 4768 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.808078 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.808115 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:59Z","lastTransitionTime":"2025-11-24T16:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.819711 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:59Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.851078 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:55Z\\\",\\\"message\\\":\\\"/factory.go:140\\\\nI1124 16:52:55.648744 6503 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 16:52:55.648836 6503 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 16:52:55.648865 6503 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 16:52:55.648838 6503 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 16:52:55.649226 6503 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 16:52:55.649242 6503 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 16:52:55.649289 6503 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 16:52:55.649305 6503 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 16:52:55.649321 6503 factory.go:656] Stopping watch factory\\\\nI1124 16:52:55.649336 6503 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 16:52:55.649428 6503 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-98lk9_openshift-ovn-kubernetes(17a83d5e-e5e7-422d-ab0e-647ca2eefb37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:59Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.867529 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:59Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.882571 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:59Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.896495 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:59Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.911153 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.911217 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.911230 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.911258 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.911272 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:52:59Z","lastTransitionTime":"2025-11-24T16:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.912958 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:59Z is after 2025-08-24T17:21:41Z" Nov 24 16:52:59 crc kubenswrapper[4768]: I1124 16:52:59.929740 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:52:59Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.015108 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.015151 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.015160 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.015176 4768 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.015186 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:00Z","lastTransitionTime":"2025-11-24T16:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.118493 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.118543 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.118561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.118590 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.118613 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:00Z","lastTransitionTime":"2025-11-24T16:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.221428 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.221459 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.221469 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.221483 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.221494 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:00Z","lastTransitionTime":"2025-11-24T16:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.323980 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.324060 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.324078 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.324106 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.324127 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:00Z","lastTransitionTime":"2025-11-24T16:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.427532 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.427594 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.427608 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.427631 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.427647 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:00Z","lastTransitionTime":"2025-11-24T16:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.531678 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.531739 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.531758 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.531785 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.531809 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:00Z","lastTransitionTime":"2025-11-24T16:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.580691 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.580775 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.580694 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:00 crc kubenswrapper[4768]: E1124 16:53:00.580901 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:00 crc kubenswrapper[4768]: E1124 16:53:00.581085 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:00 crc kubenswrapper[4768]: E1124 16:53:00.581211 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.634920 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.634987 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.635005 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.635029 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.635046 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:00Z","lastTransitionTime":"2025-11-24T16:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.738404 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.738489 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.738513 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.738544 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.738568 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:00Z","lastTransitionTime":"2025-11-24T16:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.842186 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.842248 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.842268 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.842293 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.842312 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:00Z","lastTransitionTime":"2025-11-24T16:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.945622 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.945685 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.945698 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.945722 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:00 crc kubenswrapper[4768]: I1124 16:53:00.945738 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:00Z","lastTransitionTime":"2025-11-24T16:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.049115 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.049258 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.049291 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.049332 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.049409 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:01Z","lastTransitionTime":"2025-11-24T16:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.155181 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.155308 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.155331 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.155391 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.155424 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:01Z","lastTransitionTime":"2025-11-24T16:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.260474 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.260534 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.260552 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.260587 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.260606 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:01Z","lastTransitionTime":"2025-11-24T16:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.363653 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.363714 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.363732 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.363758 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.363777 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:01Z","lastTransitionTime":"2025-11-24T16:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.467688 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.467754 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.467772 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.467799 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.467818 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:01Z","lastTransitionTime":"2025-11-24T16:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.571548 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.571617 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.571635 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.571658 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.571676 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:01Z","lastTransitionTime":"2025-11-24T16:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.580091 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:01 crc kubenswrapper[4768]: E1124 16:53:01.580260 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.675069 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.675126 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.675137 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.675154 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.675164 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:01Z","lastTransitionTime":"2025-11-24T16:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.778382 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.778460 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.778483 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.778516 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.778547 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:01Z","lastTransitionTime":"2025-11-24T16:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
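
Annotation: the recurring NetworkReady=false message is the kubelet relaying the container runtime's network status: no CNI configuration file exists in /etc/kubernetes/cni/net.d/, so every new pod sandbox (network-metrics-daemon, network-check-target, networking-console-plugin, ...) is refused. OVN-Kubernetes would normally drop a conflist into that directory, but its ovnkube-controller is crash-looping on the expired webhook certificate noted above, closing the loop. A minimal sketch of the directory probe the message implies; the real check lives in the runtime's CNI library, and the accepted file extensions here are an assumption.

```python
# Minimal sketch of the readiness probe implied by the repeated error:
# "no CNI configuration file in /etc/kubernetes/cni/net.d/".
# Illustrative only; the container runtime's CNI library does the real
# check, and the extension list below is an assumption.
from pathlib import Path

CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")

def network_ready() -> bool:
    confs = [p for p in CNI_CONF_DIR.glob("*")
             if p.suffix in (".conf", ".conflist", ".json")]
    return bool(confs)

if not network_ready():
    print("NetworkReady=false reason:NetworkPluginNotReady "
          f"message:no CNI configuration file in {CNI_CONF_DIR}/")
```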
Has your network provider started?"} Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.882108 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.882238 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.882258 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.882289 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.882309 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:01Z","lastTransitionTime":"2025-11-24T16:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.985610 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.985658 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.985669 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.985687 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:01 crc kubenswrapper[4768]: I1124 16:53:01.985702 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:01Z","lastTransitionTime":"2025-11-24T16:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:02 crc kubenswrapper[4768]: I1124 16:53:02.088732 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:02 crc kubenswrapper[4768]: I1124 16:53:02.088776 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:02 crc kubenswrapper[4768]: I1124 16:53:02.088789 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:02 crc kubenswrapper[4768]: I1124 16:53:02.088807 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:02 crc kubenswrapper[4768]: I1124 16:53:02.088821 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:02Z","lastTransitionTime":"2025-11-24T16:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 16:53:02 crc kubenswrapper[4768]: I1124 16:53:02.192250 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:02 crc kubenswrapper[4768]: I1124 16:53:02.192404 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:02 crc kubenswrapper[4768]: I1124 16:53:02.192437 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:02 crc kubenswrapper[4768]: I1124 16:53:02.192467 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:02 crc kubenswrapper[4768]: I1124 16:53:02.192491 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:02Z","lastTransitionTime":"2025-11-24T16:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:02 crc kubenswrapper[4768]: I1124 16:53:02.580157 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 16:53:02 crc kubenswrapper[4768]: E1124 16:53:02.580403 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 16:53:02 crc kubenswrapper[4768]: I1124 16:53:02.580734 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 16:53:02 crc kubenswrapper[4768]: E1124 16:53:02.580835 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 16:53:02 crc kubenswrapper[4768]: I1124 16:53:02.581047 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 16:53:02 crc kubenswrapper[4768]: E1124 16:53:02.581145 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 16:53:03 crc kubenswrapper[4768]: I1124 16:53:03.580680 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl"
Nov 24 16:53:03 crc kubenswrapper[4768]: E1124 16:53:03.580949 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927"
Nov 24 16:53:04 crc kubenswrapper[4768]: I1124 16:53:04.579729 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 16:53:04 crc kubenswrapper[4768]: E1124 16:53:04.579841 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 16:53:04 crc kubenswrapper[4768]: I1124 16:53:04.579975 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 16:53:04 crc kubenswrapper[4768]: E1124 16:53:04.580027 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 16:53:04 crc kubenswrapper[4768]: I1124 16:53:04.579980 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 16:53:04 crc kubenswrapper[4768]: E1124 16:53:04.580084 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 16:53:05 crc kubenswrapper[4768]: I1124 16:53:05.580477 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl"
Nov 24 16:53:05 crc kubenswrapper[4768]: E1124 16:53:05.580689 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927"
Nov 24 16:53:06 crc kubenswrapper[4768]: I1124 16:53:06.580489 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 16:53:06 crc kubenswrapper[4768]: I1124 16:53:06.580594 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 16:53:06 crc kubenswrapper[4768]: I1124 16:53:06.580497 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 16:53:06 crc kubenswrapper[4768]: E1124 16:53:06.580846 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 16:53:06 crc kubenswrapper[4768]: E1124 16:53:06.580826 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 16:53:06 crc kubenswrapper[4768]: E1124 16:53:06.581012 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.391491 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.391539 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.391549 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.391570 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.391581 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:07Z","lastTransitionTime":"2025-11-24T16:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:07 crc kubenswrapper[4768]: E1124 16:53:07.406807 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:07Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.412601 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.412664 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.412686 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.412714 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.412734 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:07Z","lastTransitionTime":"2025-11-24T16:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:07 crc kubenswrapper[4768]: E1124 16:53:07.432078 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:07Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.437070 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.437110 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
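The same five-entry cycle (three pressure events, NodeNotReady, and a "Node became not ready" condition) repeats roughly every 100 ms while the CNI configuration is missing, and every occurrence carries the same reason (KubeletNotReady) and the same NetworkPluginNotReady message, so the loop adds nothing beyond its timestamps. The condition= payload in the setters.go:603 entries is plain JSON, so the reason and message can be pulled out of a captured journal line with stdlib tooling alone; a minimal, purely illustrative sketch (the line below is abridged from the entries above):

    import json
    import re

    # Illustrative only: one "Node became not ready" entry, abridged from
    # the journal lines above, with the condition= payload kept verbatim.
    line = (
        'I1124 16:53:07.437172 4768 setters.go:603] "Node became not ready" '
        'node="crc" condition={"type":"Ready","status":"False",'
        '"lastHeartbeatTime":"2025-11-24T16:53:07Z",'
        '"lastTransitionTime":"2025-11-24T16:53:07Z",'
        '"reason":"KubeletNotReady","message":"container runtime network not '
        'ready: NetworkReady=false reason:NetworkPluginNotReady message:Network '
        'plugin returns error: no CNI configuration file in '
        '/etc/kubernetes/cni/net.d/. Has your network provider started?"}'
    )

    # The condition payload is valid JSON, so a greedy match up to the
    # closing brace is enough for lines shaped like the ones above.
    match = re.search(r"condition=(\{.*\})", line)
    if match:
        condition = json.loads(match.group(1))
        print(condition["reason"], "-", condition["message"])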
event="NodeHasNoDiskPressure" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.437139 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.437160 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.437172 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:07Z","lastTransitionTime":"2025-11-24T16:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:07 crc kubenswrapper[4768]: E1124 16:53:07.454186 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:07Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.458316 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.458457 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
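The patch itself is rejected before it reaches the API server's storage: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743/node presents a serving certificate whose notAfter is 2025-08-24T17:21:41Z, roughly three months before the log's current time. A quick way to confirm the expiry from the node is to pull the certificate and compare its notAfter date; a minimal sketch, assuming the endpoint is reachable from where the script runs and that the third-party cryptography package is installed:

    import ssl
    from datetime import datetime, timezone

    from cryptography import x509  # third-party; assumed available here

    # Illustrative check against the webhook endpoint named in the log above.
    # ssl.get_server_certificate() does not verify the peer by default, so it
    # still returns the PEM even though the certificate is expired.
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode())

    # not_valid_after is a naive datetime in UTC in older cryptography
    # releases; attach the timezone explicitly before comparing.
    not_after = cert.not_valid_after.replace(tzinfo=timezone.utc)
    now = datetime.now(timezone.utc)
    print(f"notAfter={not_after.isoformat()} expired={now > not_after}")

Until that certificate is rotated, every status patch will keep failing the same way, which is why the kubelet logs "will retry" and immediately tries again.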
event="NodeHasNoDiskPressure" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.458494 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.458524 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.458542 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:07Z","lastTransitionTime":"2025-11-24T16:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:07 crc kubenswrapper[4768]: E1124 16:53:07.474050 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:07Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.477460 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.477491 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
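Independently of the webhook failure, the Ready=False condition traces back to the empty CNI configuration directory named in every message above. The kubelet (through libcni) considers the network plugin ready once a *.conf, *.conflist, or *.json file loads from that directory; a minimal, purely illustrative sketch of the same check:

    from pathlib import Path

    # The directory named in every NetworkPluginNotReady message above.
    net_d = Path("/etc/kubernetes/cni/net.d")

    # libcni treats *.conf, *.conflist and *.json as candidate CNI configs;
    # the kubelet reports the network plugin ready once one of them loads.
    configs = sorted(
        p for p in net_d.glob("*") if p.suffix in {".conf", ".conflist", ".json"}
    )
    if configs:
        print("CNI config present:", ", ".join(p.name for p in configs))
    else:
        print(f"no CNI configuration file in {net_d}/")

On OpenShift the network operator is normally what populates this directory, and it cannot make progress while the node stays NotReady, which suggests the two failures can reinforce each other here.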
event="NodeHasNoDiskPressure" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.477505 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.477525 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.477538 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:07Z","lastTransitionTime":"2025-11-24T16:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:07 crc kubenswrapper[4768]: E1124 16:53:07.491290 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:07Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:07 crc kubenswrapper[4768]: E1124 16:53:07.491412 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.493134 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.493206 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.493225 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.493251 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.493276 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:07Z","lastTransitionTime":"2025-11-24T16:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.580474 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:07 crc kubenswrapper[4768]: E1124 16:53:07.580688 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.595388 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.595481 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.595498 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.595575 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.595596 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:07Z","lastTransitionTime":"2025-11-24T16:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.698106 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.698152 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.698164 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.698189 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.698204 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:07Z","lastTransitionTime":"2025-11-24T16:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.801038 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.801107 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.801125 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.801154 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.801171 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:07Z","lastTransitionTime":"2025-11-24T16:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.904321 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.904422 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.904442 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.904466 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:07 crc kubenswrapper[4768]: I1124 16:53:07.904484 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:07Z","lastTransitionTime":"2025-11-24T16:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.007123 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.007175 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.007187 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.007207 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.007220 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:08Z","lastTransitionTime":"2025-11-24T16:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.110401 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.110486 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.110533 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.110558 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.110575 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:08Z","lastTransitionTime":"2025-11-24T16:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.184873 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs\") pod \"network-metrics-daemon-275xl\" (UID: \"ff18637c-91e0-4ea4-9f9a-53c5b0277927\") " pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:08 crc kubenswrapper[4768]: E1124 16:53:08.185107 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 16:53:08 crc kubenswrapper[4768]: E1124 16:53:08.185205 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs podName:ff18637c-91e0-4ea4-9f9a-53c5b0277927 nodeName:}" failed. No retries permitted until 2025-11-24 16:53:40.185178951 +0000 UTC m=+101.432147639 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs") pod "network-metrics-daemon-275xl" (UID: "ff18637c-91e0-4ea4-9f9a-53c5b0277927") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.213152 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.213179 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.213188 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.213203 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.213213 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:08Z","lastTransitionTime":"2025-11-24T16:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.315989 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.316030 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.316043 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.316060 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.316073 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:08Z","lastTransitionTime":"2025-11-24T16:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.418827 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.418873 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.418882 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.418900 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.418915 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:08Z","lastTransitionTime":"2025-11-24T16:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.521473 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.521522 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.521535 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.521556 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.521570 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:08Z","lastTransitionTime":"2025-11-24T16:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.580158 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.580158 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.580266 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:08 crc kubenswrapper[4768]: E1124 16:53:08.580872 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:08 crc kubenswrapper[4768]: E1124 16:53:08.581004 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:08 crc kubenswrapper[4768]: E1124 16:53:08.581216 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.624140 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.624186 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.624202 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.624225 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.624243 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:08Z","lastTransitionTime":"2025-11-24T16:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.727380 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.727463 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.727486 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.727515 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.727536 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:08Z","lastTransitionTime":"2025-11-24T16:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.830045 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.830104 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.830121 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.830145 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.830163 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:08Z","lastTransitionTime":"2025-11-24T16:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.932386 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.932420 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.932430 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.932448 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:08 crc kubenswrapper[4768]: I1124 16:53:08.932458 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:08Z","lastTransitionTime":"2025-11-24T16:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.035555 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.035608 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.035617 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.035635 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.035645 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:09Z","lastTransitionTime":"2025-11-24T16:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.138226 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.138547 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.138617 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.138681 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.138744 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:09Z","lastTransitionTime":"2025-11-24T16:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.241712 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.241764 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.241781 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.241805 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.241822 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:09Z","lastTransitionTime":"2025-11-24T16:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.344405 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.344461 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.344479 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.344501 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.344518 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:09Z","lastTransitionTime":"2025-11-24T16:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.447840 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.447896 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.447912 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.447937 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.447956 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:09Z","lastTransitionTime":"2025-11-24T16:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.550095 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.550140 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.550151 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.550172 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.550185 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:09Z","lastTransitionTime":"2025-11-24T16:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.580520 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:09 crc kubenswrapper[4768]: E1124 16:53:09.581062 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.581452 4768 scope.go:117] "RemoveContainer" containerID="23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d" Nov 24 16:53:09 crc kubenswrapper[4768]: E1124 16:53:09.581868 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-98lk9_openshift-ovn-kubernetes(17a83d5e-e5e7-422d-ab0e-647ca2eefb37)\"" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.594389 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:09Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.611412 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:09Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.622667 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:09Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.635965 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:09Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.649805 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:09Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.652598 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.652650 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.652664 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.652687 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.652703 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:09Z","lastTransitionTime":"2025-11-24T16:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.670552 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda3d09a7bd3d717900b61eb723790a947fa41f5ba8158f846b732032850de11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:09Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.691404 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d23d704e-96c9-4e48-8f80-0761fb1d07e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cefe8fbd1321d8e391d341491eff1a583f56e4ef09d1ba71da4d8c84a826185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b0efb1b7f2aad144c24537d9304024680adc1946d26a91c03dcf4c59ac4dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e98467b337c1b1625211569f5df1ad40d100d3243c5358dc61c73327cf0af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:09Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.702574 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:09Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.721107 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:09Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.735149 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-275xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff18637c-91e0-4ea4-9f9a-53c5b0277927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-275xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:09Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.755719 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:09Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.757630 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.757924 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.758019 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.758123 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.758206 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:09Z","lastTransitionTime":"2025-11-24T16:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.770577 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:09Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.782672 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:09Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.796955 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c17685b6f60ddfe6df8a9e2101c8261b61a004fe711173e41d189807791444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6511b48fa7f312490f11e2e4851845631f91f5e9b75073f49f938f237c9b0aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wdz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:09Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.820535 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa
3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731473
1ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:09Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.831872 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:09Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.849289 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:55Z\\\",\\\"message\\\":\\\"/factory.go:140\\\\nI1124 16:52:55.648744 6503 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 16:52:55.648836 6503 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 16:52:55.648865 6503 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 16:52:55.648838 6503 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 16:52:55.649226 6503 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 16:52:55.649242 6503 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 16:52:55.649289 6503 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 16:52:55.649305 6503 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 16:52:55.649321 6503 factory.go:656] Stopping watch factory\\\\nI1124 16:52:55.649336 6503 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 16:52:55.649428 6503 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-98lk9_openshift-ovn-kubernetes(17a83d5e-e5e7-422d-ab0e-647ca2eefb37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:09Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.860017 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.860123 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.860144 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.860171 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.860189 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:09Z","lastTransitionTime":"2025-11-24T16:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.861540 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:09Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.962933 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 
16:53:09.963049 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.963073 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.963106 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:09 crc kubenswrapper[4768]: I1124 16:53:09.963153 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:09Z","lastTransitionTime":"2025-11-24T16:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.056798 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k8vfj_b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a/kube-multus/0.log" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.056853 4768 generic.go:334] "Generic (PLEG): container finished" podID="b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a" containerID="4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5" exitCode=1 Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.056895 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k8vfj" event={"ID":"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a","Type":"ContainerDied","Data":"4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5"} Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.057280 4768 scope.go:117] "RemoveContainer" containerID="4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.064712 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.064976 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.064985 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.065002 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.065011 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:10Z","lastTransitionTime":"2025-11-24T16:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.073868 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c17685b6f60ddfe6df8a9e2101c8261b61a004fe711173e41d189807791444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6511b48fa7f312490f11e2e4851845631f91f5e9b75073f49f938f237c9b0aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wdz4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:10Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.096551 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:10Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.111464 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:10Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.123472 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:10Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.136623 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:53:10Z\\\",\\\"message\\\":\\\"2025-11-24T16:52:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5c8a5e4c-9ef4-4ab1-bb5e-af7053293511\\\\n2025-11-24T16:52:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5c8a5e4c-9ef4-4ab1-bb5e-af7053293511 to /host/opt/cni/bin/\\\\n2025-11-24T16:52:24Z 
[verbose] multus-daemon started\\\\n2025-11-24T16:52:24Z [verbose] Readiness Indicator file check\\\\n2025-11-24T16:53:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:10Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.147815 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:10Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.157703 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:10Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.167879 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.167910 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.167921 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.167936 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.167946 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:10Z","lastTransitionTime":"2025-11-24T16:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.173739 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:55Z\\\",\\\"message\\\":\\\"/factory.go:140\\\\nI1124 16:52:55.648744 6503 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 16:52:55.648836 6503 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 16:52:55.648865 6503 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 16:52:55.648838 6503 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 16:52:55.649226 6503 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 16:52:55.649242 6503 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 16:52:55.649289 6503 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 16:52:55.649305 6503 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 16:52:55.649321 6503 factory.go:656] Stopping watch factory\\\\nI1124 16:52:55.649336 6503 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 16:52:55.649428 6503 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed 
container=ovnkube-controller pod=ovnkube-node-98lk9_openshift-ovn-kubernetes(17a83d5e-e5e7-422d-ab0e-647ca2eefb37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:10Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.183463 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:10Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.194853 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift
-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:10Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.205165 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:10Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.221321 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:10Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.231182 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:10Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.243909 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d
24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:10Z is after 2025-08-24T17:21:41Z"
Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.253610 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-275xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff18637c-91e0-4ea4-9f9a-53c5b0277927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-275xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:10Z is after 2025-08-24T17:21:41Z"
Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.265813 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda3d09a7bd3d717900b61eb723790a947fa41f5ba8158f846b732032850de11\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:10Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.271117 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.271143 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.271152 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.271166 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.271176 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:10Z","lastTransitionTime":"2025-11-24T16:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.277596 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d23d704e-96c9-4e48-8f80-0761fb1d07e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cefe8fbd1321d8e391d341491eff1a583f56e4ef09d1ba71da4d8c84a826185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b0efb1b7f2aad144c24537d9304024680adc1946d26a91c03dcf4c59ac4dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e98467b337c1b1625211569f5df1ad40d100d3243c5358dc61c73327cf0af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:10Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.288736 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:10Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.373654 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.373691 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.373706 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.373729 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.373746 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:10Z","lastTransitionTime":"2025-11-24T16:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.477120 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.477157 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.477165 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.477183 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.477194 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:10Z","lastTransitionTime":"2025-11-24T16:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.580025 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.580204 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:10 crc kubenswrapper[4768]: E1124 16:53:10.580342 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.580481 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:10 crc kubenswrapper[4768]: E1124 16:53:10.580594 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:10 crc kubenswrapper[4768]: E1124 16:53:10.580880 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.581211 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.581239 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.581252 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.581271 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.581285 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:10Z","lastTransitionTime":"2025-11-24T16:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.684929 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.684971 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.684982 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.684998 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.685012 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:10Z","lastTransitionTime":"2025-11-24T16:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.788463 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.788512 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.788525 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.788546 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.788561 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:10Z","lastTransitionTime":"2025-11-24T16:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.891183 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.891248 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.891259 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.891278 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.891295 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:10Z","lastTransitionTime":"2025-11-24T16:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.994794 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.994847 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.994860 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.994877 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:10 crc kubenswrapper[4768]: I1124 16:53:10.994891 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:10Z","lastTransitionTime":"2025-11-24T16:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.062471 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k8vfj_b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a/kube-multus/0.log" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.062526 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k8vfj" event={"ID":"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a","Type":"ContainerStarted","Data":"e9a43420d4b39e1291af651377602da94003efadb5a395178d644b9333412e35"} Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.075815 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d23d704e-96c9-4e48-8f80-0761fb1d07e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cefe8fbd1321d8e391d341491eff1a583f56e4ef09d1ba71da4d8c84a826185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b0efb1b7f2aad144c24537d9304024680adc1946d26a91c03dcf4c59ac4dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e98467b337c1b1625211569f5df1ad40d100d3243c5358dc61c73327cf0af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:11Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.088583 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:11Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.097587 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.097618 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.097629 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.097644 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.097658 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:11Z","lastTransitionTime":"2025-11-24T16:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.106153 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:11Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.118767 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-275xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff18637c-91e0-4ea4-9f9a-53c5b0277927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-275xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:11Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.133408 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda3d09a7bd3d717900b61eb723790a947fa41f5ba8158f846b732032850de11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:11Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.149005 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:11Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.161949 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9a43420d4b39e1291af651377602da94003efadb5a395178d644b9333412e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:53:10Z\\\",\\\"message\\\":\\\"2025-11-24T16:52:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5c8a5e4c-9ef4-4ab1-bb5e-af7053293511\\\\n2025-11-24T16:52:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5c8a5e4c-9ef4-4ab1-bb5e-af7053293511 to /host/opt/cni/bin/\\\\n2025-11-24T16:52:24Z [verbose] multus-daemon started\\\\n2025-11-24T16:52:24Z [verbose] Readiness Indicator file check\\\\n2025-11-24T16:53:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:11Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.174140 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c17685b6f60ddfe6df8a9e2101c8261b61a004fe711173e41d189807791444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6511b48fa7f312490f11e2e4851845631f91f5e9b75073f49f938f237c9b0aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wdz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:11Z is after 2025-08-24T17:21:41Z" Nov 24 
16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.193301 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:11Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.199899 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.199932 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.199943 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.199961 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.199971 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:11Z","lastTransitionTime":"2025-11-24T16:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.208821 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:11Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.236360 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:55Z\\\",\\\"message\\\":\\\"/factory.go:140\\\\nI1124 16:52:55.648744 6503 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 16:52:55.648836 6503 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 16:52:55.648865 6503 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 16:52:55.648838 6503 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 16:52:55.649226 6503 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 16:52:55.649242 6503 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 16:52:55.649289 6503 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 16:52:55.649305 6503 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 16:52:55.649321 6503 factory.go:656] Stopping watch factory\\\\nI1124 16:52:55.649336 6503 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 16:52:55.649428 6503 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-98lk9_openshift-ovn-kubernetes(17a83d5e-e5e7-422d-ab0e-647ca2eefb37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:11Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.254403 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:11Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.266429 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:11Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.280555 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:11Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.296553 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:11Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.302292 4768 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.302319 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.302328 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.302359 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.302369 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:11Z","lastTransitionTime":"2025-11-24T16:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.311383 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:11Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.324389 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:11Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.344761 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:11Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.405399 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.405464 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.405486 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.405519 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.405538 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:11Z","lastTransitionTime":"2025-11-24T16:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.508120 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.508796 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.508836 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.508893 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.508912 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:11Z","lastTransitionTime":"2025-11-24T16:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.580783 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:11 crc kubenswrapper[4768]: E1124 16:53:11.581554 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.611904 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.611971 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.611989 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.612014 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.612032 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:11Z","lastTransitionTime":"2025-11-24T16:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.714561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.714641 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.714659 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.714689 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.714708 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:11Z","lastTransitionTime":"2025-11-24T16:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.817904 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.818409 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.818649 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.818876 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.819077 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:11Z","lastTransitionTime":"2025-11-24T16:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.922878 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.922939 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.922960 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.922990 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:11 crc kubenswrapper[4768]: I1124 16:53:11.923006 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:11Z","lastTransitionTime":"2025-11-24T16:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.026039 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.026092 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.026110 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.026134 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.026191 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:12Z","lastTransitionTime":"2025-11-24T16:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.128785 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.128840 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.128857 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.128879 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.128898 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:12Z","lastTransitionTime":"2025-11-24T16:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.231802 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.231870 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.231892 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.231922 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.231945 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:12Z","lastTransitionTime":"2025-11-24T16:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.333967 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.334023 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.334041 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.334068 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.334089 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:12Z","lastTransitionTime":"2025-11-24T16:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.436225 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.436301 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.436319 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.436383 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.436402 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:12Z","lastTransitionTime":"2025-11-24T16:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.539007 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.539052 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.539069 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.539092 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.539110 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:12Z","lastTransitionTime":"2025-11-24T16:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.580239 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.580344 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.580251 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:12 crc kubenswrapper[4768]: E1124 16:53:12.580611 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:12 crc kubenswrapper[4768]: E1124 16:53:12.580715 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:12 crc kubenswrapper[4768]: E1124 16:53:12.580873 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.641703 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.641765 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.641788 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.641816 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.641837 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:12Z","lastTransitionTime":"2025-11-24T16:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.745039 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.745109 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.745134 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.745166 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.745190 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:12Z","lastTransitionTime":"2025-11-24T16:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.848423 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.848504 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.848524 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.848581 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.848599 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:12Z","lastTransitionTime":"2025-11-24T16:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.951748 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.951812 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.951835 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.951867 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:12 crc kubenswrapper[4768]: I1124 16:53:12.951890 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:12Z","lastTransitionTime":"2025-11-24T16:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.054479 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.054542 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.054560 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.054586 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.054607 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:13Z","lastTransitionTime":"2025-11-24T16:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.157597 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.157664 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.157683 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.157712 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.157730 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:13Z","lastTransitionTime":"2025-11-24T16:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.261192 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.261272 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.261297 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.261329 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.261401 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:13Z","lastTransitionTime":"2025-11-24T16:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.365083 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.365161 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.365186 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.365223 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.365249 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:13Z","lastTransitionTime":"2025-11-24T16:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.467900 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.467962 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.467981 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.468007 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.468026 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:13Z","lastTransitionTime":"2025-11-24T16:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.570756 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.570817 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.570837 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.570865 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.570887 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:13Z","lastTransitionTime":"2025-11-24T16:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.580432 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:13 crc kubenswrapper[4768]: E1124 16:53:13.580757 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
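The kubelet keeps reporting NetworkReady=false because /etc/kubernetes/cni/net.d/ holds no network config yet; on OpenShift that file is normally written by the cluster network operator (OVN-Kubernetes via Multus) once it comes up, not by hand. Purely to illustrate the kind of file the kubelet is polling for, and not the operator's actual payload, a sketch that writes an example conflist (plugin type, CNI version, and subnet are all assumptions):

    # Illustrative only: shape of a CNI network config like those the
    # kubelet expects in /etc/kubernetes/cni/net.d/. Written to /tmp so
    # it cannot interfere with the live directory.
    import json

    example_conflist = {
        "cniVersion": "0.4.0",          # assumed version for the sketch
        "name": "example-net",          # hypothetical network name
        "plugins": [{
            "type": "bridge",           # assumed plugin, not OVN's real one
            "bridge": "cni0",
            "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"},
        }],
    }

    with open("/tmp/00-example.conflist", "w") as f:
        json.dump(example_conflist, f, indent=2)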
pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.673890 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.673998 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.674025 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.674063 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.674092 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:13Z","lastTransitionTime":"2025-11-24T16:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.777591 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.777703 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.777724 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.777796 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.777819 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:13Z","lastTransitionTime":"2025-11-24T16:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.881253 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.881339 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.881379 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.881404 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.881421 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:13Z","lastTransitionTime":"2025-11-24T16:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.985276 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.985322 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.985331 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.985363 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:13 crc kubenswrapper[4768]: I1124 16:53:13.985373 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:13Z","lastTransitionTime":"2025-11-24T16:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.088452 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.088519 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.088535 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.088561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.088581 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:14Z","lastTransitionTime":"2025-11-24T16:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.191819 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.191891 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.191910 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.191937 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.191957 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:14Z","lastTransitionTime":"2025-11-24T16:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.295808 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.295872 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.295891 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.295918 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.295936 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:14Z","lastTransitionTime":"2025-11-24T16:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.399286 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.399404 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.399438 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.399502 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.399530 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:14Z","lastTransitionTime":"2025-11-24T16:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.502432 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.502481 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.502494 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.502512 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.502524 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:14Z","lastTransitionTime":"2025-11-24T16:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.580319 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.580394 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.580324 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:14 crc kubenswrapper[4768]: E1124 16:53:14.580543 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:14 crc kubenswrapper[4768]: E1124 16:53:14.580678 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:14 crc kubenswrapper[4768]: E1124 16:53:14.580827 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
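Each "No sandbox for pod can be found" entry means the pod has no runtime sandbox yet, and sandbox creation cannot proceed until the network plugin is ready, so these pods stay Pending. A sketch that lists the pods parked this way, again assuming kubeconfig access (the field-selector keys are the standard ones for pods):

    # Sketch: list pods scheduled to node "crc" that are still Pending
    # while the network plugin is down.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()
    pods = v1.list_pod_for_all_namespaces(
        field_selector="spec.nodeName=crc,status.phase=Pending")
    for p in pods.items:
        print(p.metadata.namespace, p.metadata.name)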
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.604806 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.604858 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.604877 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.604901 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.604919 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:14Z","lastTransitionTime":"2025-11-24T16:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.707639 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.707705 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.707730 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.707761 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.707783 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:14Z","lastTransitionTime":"2025-11-24T16:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.810875 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.810949 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.810974 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.811010 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.811032 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:14Z","lastTransitionTime":"2025-11-24T16:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.913609 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.913643 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.913655 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.913672 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:14 crc kubenswrapper[4768]: I1124 16:53:14.913686 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:14Z","lastTransitionTime":"2025-11-24T16:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.017381 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.017459 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.017476 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.017502 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.017520 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:15Z","lastTransitionTime":"2025-11-24T16:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.120285 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.120421 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.120440 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.120465 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.120481 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:15Z","lastTransitionTime":"2025-11-24T16:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.224091 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.224155 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.224173 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.224201 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.224218 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:15Z","lastTransitionTime":"2025-11-24T16:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.326919 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.327012 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.327045 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.327076 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.327101 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:15Z","lastTransitionTime":"2025-11-24T16:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.430575 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.430660 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.430688 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.430720 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.430744 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:15Z","lastTransitionTime":"2025-11-24T16:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.533906 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.533979 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.534035 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.534061 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.534076 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:15Z","lastTransitionTime":"2025-11-24T16:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.579973 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:15 crc kubenswrapper[4768]: E1124 16:53:15.580239 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
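The "Error syncing pod, skipping" entries recur as the kubelet's pod workers retry each pod. A small sketch that tallies those retries per pod from a saved copy of this journal, assuming one entry per line as reformatted here (the file path is a placeholder):

    # Sketch: count "Error syncing pod, skipping" entries per pod in an
    # exported copy of this journal.
    import re
    from collections import Counter

    pat = re.compile(r'"Error syncing pod, skipping".*?pod="([^"]+)"')
    counts = Counter()
    with open("kubelet.log") as f:   # hypothetical export of this journal
        for line in f:
            m = pat.search(line)
            if m:
                counts[m.group(1)] += 1
    for pod, n in counts.most_common():
        print(n, pod)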
pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.664746 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.664805 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.664823 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.664852 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.664875 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:15Z","lastTransitionTime":"2025-11-24T16:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.768117 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.768187 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.768280 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.768310 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.768333 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:15Z","lastTransitionTime":"2025-11-24T16:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.871812 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.871872 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.871888 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.871912 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.871931 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:15Z","lastTransitionTime":"2025-11-24T16:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.974633 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.974688 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.974705 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.974729 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:15 crc kubenswrapper[4768]: I1124 16:53:15.974748 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:15Z","lastTransitionTime":"2025-11-24T16:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.078034 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.078125 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.078143 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.078215 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.078234 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:16Z","lastTransitionTime":"2025-11-24T16:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.181810 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.181892 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.181939 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.181975 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.181998 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:16Z","lastTransitionTime":"2025-11-24T16:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.285599 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.285642 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.285654 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.285672 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.285687 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:16Z","lastTransitionTime":"2025-11-24T16:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.389192 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.389285 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.389309 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.389341 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.389394 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:16Z","lastTransitionTime":"2025-11-24T16:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.492702 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.492752 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.492768 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.492794 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.492812 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:16Z","lastTransitionTime":"2025-11-24T16:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.580217 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.580238 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.580238 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:16 crc kubenswrapper[4768]: E1124 16:53:16.580496 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:16 crc kubenswrapper[4768]: E1124 16:53:16.580593 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:16 crc kubenswrapper[4768]: E1124 16:53:16.580657 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
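Note the cadence: the network-diagnostics and console-plugin pods are retried on even seconds (16:53:12, :14, :16) and network-metrics-daemon on odd seconds (16:53:13, :15, :17), consistent with independent per-pod retry timers. To watch the node-level events that the "Recording event message" entries feed, a sketch using the events API, assuming kubeconfig access:

    # Sketch: stream events involving node "crc" (NodeNotReady,
    # NodeHasSufficientMemory, ...) as they are recorded.
    from kubernetes import client, config, watch

    config.load_kube_config()
    v1 = client.CoreV1Api()
    w = watch.Watch()
    for ev in w.stream(v1.list_event_for_all_namespaces,
                       field_selector="involvedObject.name=crc",
                       timeout_seconds=30):
        obj = ev["object"]
        print(obj.last_timestamp, obj.reason, obj.message)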
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.595506 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.595579 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.595598 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.595626 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.595646 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:16Z","lastTransitionTime":"2025-11-24T16:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.698425 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.698487 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.698503 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.698527 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.698544 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:16Z","lastTransitionTime":"2025-11-24T16:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.801525 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.801579 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.801593 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.801612 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.801625 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:16Z","lastTransitionTime":"2025-11-24T16:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.904759 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.904821 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.904843 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.904875 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:16 crc kubenswrapper[4768]: I1124 16:53:16.904898 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:16Z","lastTransitionTime":"2025-11-24T16:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.007644 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.007701 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.007722 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.007746 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.007763 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:17Z","lastTransitionTime":"2025-11-24T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.110587 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.110643 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.110657 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.110676 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.110690 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:17Z","lastTransitionTime":"2025-11-24T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.213995 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.214052 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.214063 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.214082 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.214094 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:17Z","lastTransitionTime":"2025-11-24T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.317479 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.317569 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.317595 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.317652 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.317673 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:17Z","lastTransitionTime":"2025-11-24T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.421323 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.421449 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.421467 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.421496 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.421514 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:17Z","lastTransitionTime":"2025-11-24T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.524663 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.524732 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.524742 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.524756 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.524764 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:17Z","lastTransitionTime":"2025-11-24T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.580016 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:17 crc kubenswrapper[4768]: E1124 16:53:17.580190 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.627731 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.627798 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.627810 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.627828 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.627838 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:17Z","lastTransitionTime":"2025-11-24T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.635028 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.635077 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.635124 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.635140 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.635153 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:17Z","lastTransitionTime":"2025-11-24T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:17 crc kubenswrapper[4768]: E1124 16:53:17.651000 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:17Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.656462 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.656528 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.656543 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.656567 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.656581 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:17Z","lastTransitionTime":"2025-11-24T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:17 crc kubenswrapper[4768]: E1124 16:53:17.670626 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:17Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.674815 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.674870 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.674881 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.674899 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.674911 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:17Z","lastTransitionTime":"2025-11-24T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:17 crc kubenswrapper[4768]: E1124 16:53:17.688733 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:17Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.692538 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.692588 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.692598 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.692616 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.692626 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:17Z","lastTransitionTime":"2025-11-24T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:17 crc kubenswrapper[4768]: E1124 16:53:17.707766 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:17Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.711396 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.711424 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
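[Annotation] The x509 failure recorded above is the kubelet's TLS client rejecting the webhook serving certificate because the current time (2025-11-24T16:53:17Z) is after the certificate's NotAfter (2025-08-24T17:21:41Z). A minimal Go sketch of that validity comparison, assuming the certificate has been exported to a local PEM file (the path below is a placeholder, not taken from the log):

```go
// certcheck.go - sketch: report whether a PEM certificate has expired,
// mirroring the "certificate has expired or is not yet valid" check above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/tmp/webhook-cert.pem") // hypothetical path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now().UTC()
	fmt.Printf("NotBefore: %s\nNotAfter:  %s\nNow:       %s\n",
		cert.NotBefore.Format(time.RFC3339),
		cert.NotAfter.Format(time.RFC3339),
		now.Format(time.RFC3339))
	switch {
	case now.After(cert.NotAfter):
		fmt.Println("certificate has expired") // the condition the kubelet reports above
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```

Because the failure happens during TLS verification, every status patch to the node webhook fails the same way, which is why the identical payload recurs below on each retry.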
event="NodeHasNoDiskPressure" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.711433 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.711443 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.711451 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:17Z","lastTransitionTime":"2025-11-24T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:17 crc kubenswrapper[4768]: E1124 16:53:17.726447 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:17Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:17 crc kubenswrapper[4768]: E1124 16:53:17.726550 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.730983 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
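[Annotation] The giant payload in these "Error updating node status, will retry" records is a strategic-merge patch serialized as a string, with one layer of quote-escaping added per logging layer (hence the \\\" sequences). A sketch of recovering readable JSON from such a fragment; the input here is a tiny stand-in, not the full payload, and depending on how many quoting layers the journal applied, Unquote may need to be run more than once:

```go
// patchdump.go - sketch: undo one level of Go string quoting on an escaped
// patch fragment and pretty-print the embedded JSON.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

func main() {
	// Stand-in fragment shaped like the err="failed to patch status ..." payload.
	escaped := `"{\"status\":{\"conditions\":[{\"type\":\"Ready\",\"status\":\"False\"}]}}"`
	unquoted, err := strconv.Unquote(escaped) // strip quotes and unescape \" -> "
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(unquoted), "", "  "); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(pretty.String())
}
```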
event="NodeHasSufficientMemory" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.731033 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.731053 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.731077 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.731093 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:17Z","lastTransitionTime":"2025-11-24T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.834321 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.834381 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.834392 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.834408 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.834417 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:17Z","lastTransitionTime":"2025-11-24T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.936986 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.937043 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.937057 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.937078 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:17 crc kubenswrapper[4768]: I1124 16:53:17.937093 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:17Z","lastTransitionTime":"2025-11-24T16:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.040019 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.040070 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.040082 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.040105 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.040119 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:18Z","lastTransitionTime":"2025-11-24T16:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.142682 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.142719 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.142730 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.142749 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.142760 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:18Z","lastTransitionTime":"2025-11-24T16:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.245387 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.245447 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.245466 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.245490 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.245507 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:18Z","lastTransitionTime":"2025-11-24T16:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.348535 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.348604 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.348622 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.348649 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.348668 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:18Z","lastTransitionTime":"2025-11-24T16:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.451775 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.451830 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.451847 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.451871 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.451891 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:18Z","lastTransitionTime":"2025-11-24T16:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.554594 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.554660 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.554679 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.554706 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.554730 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:18Z","lastTransitionTime":"2025-11-24T16:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.580039 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.580144 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.580197 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:18 crc kubenswrapper[4768]: E1124 16:53:18.580377 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:18 crc kubenswrapper[4768]: E1124 16:53:18.580556 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:18 crc kubenswrapper[4768]: E1124 16:53:18.580686 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.657254 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.657290 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.657301 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.657319 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.657331 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:18Z","lastTransitionTime":"2025-11-24T16:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.759560 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.759611 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.759630 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.759659 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.759684 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:18Z","lastTransitionTime":"2025-11-24T16:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.863085 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.863195 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.863253 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.863280 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.863299 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:18Z","lastTransitionTime":"2025-11-24T16:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.966480 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.966543 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.966561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.966591 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:18 crc kubenswrapper[4768]: I1124 16:53:18.966611 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:18Z","lastTransitionTime":"2025-11-24T16:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.070613 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.070672 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.070690 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.070776 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.070828 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:19Z","lastTransitionTime":"2025-11-24T16:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.174399 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.174495 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.174520 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.174554 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.174576 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:19Z","lastTransitionTime":"2025-11-24T16:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.277849 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.277915 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.277932 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.277958 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.277977 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:19Z","lastTransitionTime":"2025-11-24T16:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.380966 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.381031 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.381044 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.381065 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.381080 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:19Z","lastTransitionTime":"2025-11-24T16:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.484969 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.485056 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.485082 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.485117 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.485140 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:19Z","lastTransitionTime":"2025-11-24T16:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.580094 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:19 crc kubenswrapper[4768]: E1124 16:53:19.580414 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
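[Annotation] The recurring NotReady condition above is gated on a simple filesystem fact: no CNI network configuration exists yet in /etc/kubernetes/cni/net.d/. The real check lives inside the kubelet/CRI network plugin manager; the sketch below only illustrates the condition it keeps reporting, using the directory named in the log messages:

```go
// cnicheck.go - sketch of the condition behind "no CNI configuration file in
// /etc/kubernetes/cni/net.d/": the network stays NotReady until a CNI config
// file (.conf/.conflist/.json) appears in the configured directory.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the log messages
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, _ := filepath.Glob(filepath.Join(dir, pat))
		found = append(found, matches...)
	}
	if len(found) == 0 {
		fmt.Println("no CNI configuration file found - network plugin not ready")
		os.Exit(1)
	}
	for _, f := range found {
		fmt.Println("found:", f)
	}
}
```

This also explains the multus restart recorded further down: kube-multus exited after timing out waiting for its readiness indicator file (10-ovn-kubernetes.conf), which ovn-kubernetes has not yet written.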
pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.586863 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.586904 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.586913 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.586929 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.586939 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:19Z","lastTransitionTime":"2025-11-24T16:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.592999 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.602893 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b20674
75ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:19Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.620231 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:19Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.633938 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:19Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.646179 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9a43420d4b39e1291af651377602da94003efadb5a395178d644b9333412e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:53:10Z\\\",\\\"message\\\":\\\"2025-11-24T16:52:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5c8a5e4c-9ef4-4ab1-bb5e-af7053293511\\\\n2025-11-24T16:52:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5c8a5e4c-9ef4-4ab1-bb5e-af7053293511 to /host/opt/cni/bin/\\\\n2025-11-24T16:52:24Z [verbose] multus-daemon started\\\\n2025-11-24T16:52:24Z [verbose] Readiness Indicator file check\\\\n2025-11-24T16:53:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:19Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.660149 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c17685b6f60ddfe6df8a9e2101c8261b61a004fe711173e41d189807791444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6511b48fa7f312490f11e2e4851845631f91f5e9b75073f49f938f237c9b0aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wdz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:19Z is after 2025-08-24T17:21:41Z" Nov 24 
16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.675016 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:19Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.688807 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.688836 4768 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.688845 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.688858 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.688868 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:19Z","lastTransitionTime":"2025-11-24T16:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.689459 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T16:53:19Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.743865 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:55Z\\\",\\\"message\\\":\\\"/factory.go:140\\\\nI1124 16:52:55.648744 6503 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 16:52:55.648836 6503 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 16:52:55.648865 6503 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 16:52:55.648838 6503 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 16:52:55.649226 6503 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 16:52:55.649242 6503 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 16:52:55.649289 6503 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 16:52:55.649305 6503 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 16:52:55.649321 6503 factory.go:656] Stopping watch factory\\\\nI1124 16:52:55.649336 6503 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 16:52:55.649428 6503 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-98lk9_openshift-ovn-kubernetes(17a83d5e-e5e7-422d-ab0e-647ca2eefb37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:19Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.758462 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:19Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.772161 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:19Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.784545 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:19Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.790862 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.790906 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.790916 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.790931 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.790944 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:19Z","lastTransitionTime":"2025-11-24T16:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.797108 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:19Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.805944 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:19Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.816373 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-275xl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff18637c-91e0-4ea4-9f9a-53c5b0277927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-275xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:19Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.831803 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda3d09a7bd3d717900b61eb723790a947fa41f5ba8158f846b732032850de11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:19Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.844179 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d23d704e-96c9-4e48-8f80-0761fb1d07e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cefe8fbd1321d8e391d341491eff1a583f56e4ef09d1ba71da4d8c84a826185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b0efb1b7f2aad144c24537d9304024680adc1946d26a91c03dcf4c59ac4dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e98467b337c1b1625211569f5df1ad40d100d3243c5358dc61c73327cf0af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:19Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.857930 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:19Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.874682 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:19Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.893497 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.893532 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.893544 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.893562 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.893575 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:19Z","lastTransitionTime":"2025-11-24T16:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.996075 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.996148 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.996161 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.996181 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:19 crc kubenswrapper[4768]: I1124 16:53:19.996193 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:19Z","lastTransitionTime":"2025-11-24T16:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.098831 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.098886 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.098904 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.098930 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.098948 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:20Z","lastTransitionTime":"2025-11-24T16:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.201462 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.201494 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.201504 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.201520 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.201528 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:20Z","lastTransitionTime":"2025-11-24T16:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.303757 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.303802 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.303812 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.303829 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.303841 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:20Z","lastTransitionTime":"2025-11-24T16:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.406706 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.407395 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.407411 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.407429 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.407439 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:20Z","lastTransitionTime":"2025-11-24T16:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.510154 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.510203 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.510220 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.510245 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.510265 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:20Z","lastTransitionTime":"2025-11-24T16:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.580257 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.580290 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:20 crc kubenswrapper[4768]: E1124 16:53:20.580479 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.580433 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:20 crc kubenswrapper[4768]: E1124 16:53:20.580600 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:20 crc kubenswrapper[4768]: E1124 16:53:20.580732 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.613283 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.613329 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.613339 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.613368 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.613381 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:20Z","lastTransitionTime":"2025-11-24T16:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.715944 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.715995 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.716012 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.716032 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.716045 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:20Z","lastTransitionTime":"2025-11-24T16:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.819596 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.819653 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.819665 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.819682 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.819697 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:20Z","lastTransitionTime":"2025-11-24T16:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.922624 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.922671 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.922687 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.922709 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:20 crc kubenswrapper[4768]: I1124 16:53:20.922727 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:20Z","lastTransitionTime":"2025-11-24T16:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.024898 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.024939 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.024949 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.024964 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.024973 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:21Z","lastTransitionTime":"2025-11-24T16:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.127151 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.127191 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.127204 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.127218 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.127228 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:21Z","lastTransitionTime":"2025-11-24T16:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.231076 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.231142 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.231163 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.231189 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.231206 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:21Z","lastTransitionTime":"2025-11-24T16:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.333857 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.333932 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.333956 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.333987 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.334011 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:21Z","lastTransitionTime":"2025-11-24T16:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.437201 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.437620 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.437636 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.437661 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.437678 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:21Z","lastTransitionTime":"2025-11-24T16:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.540297 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.540386 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.540406 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.540433 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.540450 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:21Z","lastTransitionTime":"2025-11-24T16:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.580500 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:21 crc kubenswrapper[4768]: E1124 16:53:21.580684 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.643726 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.643812 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.643853 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.643880 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.643891 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:21Z","lastTransitionTime":"2025-11-24T16:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.746628 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.746697 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.746711 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.746729 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.746743 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:21Z","lastTransitionTime":"2025-11-24T16:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.850062 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.850108 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.850117 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.850134 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.850143 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:21Z","lastTransitionTime":"2025-11-24T16:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.952266 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.952383 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.952404 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.952429 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:21 crc kubenswrapper[4768]: I1124 16:53:21.952447 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:21Z","lastTransitionTime":"2025-11-24T16:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.055612 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.055682 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.055699 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.055725 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.055748 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:22Z","lastTransitionTime":"2025-11-24T16:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.158319 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.158373 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.158385 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.158403 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.158414 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:22Z","lastTransitionTime":"2025-11-24T16:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.260642 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.260702 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.260724 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.260753 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.260773 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:22Z","lastTransitionTime":"2025-11-24T16:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.363306 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.363409 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.363565 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.363602 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.363623 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:22Z","lastTransitionTime":"2025-11-24T16:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.466588 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.466652 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.466670 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.466700 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.466722 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:22Z","lastTransitionTime":"2025-11-24T16:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.570050 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.570127 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.570144 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.570173 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.570191 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:22Z","lastTransitionTime":"2025-11-24T16:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.579812 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.579907 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.580395 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:22 crc kubenswrapper[4768]: E1124 16:53:22.580541 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:22 crc kubenswrapper[4768]: E1124 16:53:22.580943 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.581017 4768 scope.go:117] "RemoveContainer" containerID="23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d" Nov 24 16:53:22 crc kubenswrapper[4768]: E1124 16:53:22.581097 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.673785 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.673865 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.673889 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.673924 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.673951 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:22Z","lastTransitionTime":"2025-11-24T16:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.777902 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.777961 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.777987 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.778016 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.778034 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:22Z","lastTransitionTime":"2025-11-24T16:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.880933 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.880998 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.881018 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.881047 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.881067 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:22Z","lastTransitionTime":"2025-11-24T16:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.984909 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.984996 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.985017 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.985051 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:22 crc kubenswrapper[4768]: I1124 16:53:22.985070 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:22Z","lastTransitionTime":"2025-11-24T16:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.088247 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.088310 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.088331 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.088392 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.088413 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:23Z","lastTransitionTime":"2025-11-24T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.110654 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-98lk9_17a83d5e-e5e7-422d-ab0e-647ca2eefb37/ovnkube-controller/2.log" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.113855 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerStarted","Data":"10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07"} Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.115142 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.136755 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.155798 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.190498 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.191081 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.191117 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.191128 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.191147 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.191158 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:23Z","lastTransitionTime":"2025-11-24T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.214214 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.236161 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.247606 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a0d5baf-1004-4b15-8490-d38e769be8ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cc8d1811c588c8c1f29240c5ecb01aa846858f1f56f9d6ee795d43da15aff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc31b12ace7a77709b3ff576b42a37e3e4d436562f5db7eebd81f9ae23b74ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc31b12ace7a77709b3ff576b42a37e3e4d436562f5db7eebd81f9ae23b74ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-11-24T16:53:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.262314 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda3d09a7bd3d717900b61eb723790a947fa41f5ba8158f846b732032850de11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.273828 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d23d704e-96c9-4e48-8f80-0761fb1d07e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cefe8fbd1321d8e391d341491eff1a583f56e4ef09d1ba71da4d8c84a826185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b0efb1b7f2aad144c24537d9304024680adc1946d26a91c03dcf4c59ac4dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e98467b337c1b1625211569f5df1ad40d100d3243c5358dc61c73327cf0af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.288486 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.293139 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.293206 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.293220 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.293241 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.293254 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:23Z","lastTransitionTime":"2025-11-24T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.304859 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.316714 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-275xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff18637c-91e0-4ea4-9f9a-53c5b0277927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-275xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.341928 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc
639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.357864 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.377073 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.395906 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.395941 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.395950 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.395963 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:23 crc 
kubenswrapper[4768]: I1124 16:53:23.395974 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:23Z","lastTransitionTime":"2025-11-24T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.399089 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9a43420d4b39e1291af651377602da94003efadb5a395178d644b9333412e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:53:10Z\\\",\\\"message\\\":\\\"2025-11-24T16:52:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5c8a5e4c-9ef4-4ab1-bb5e-af7053293511\\\\n2025-11-24T16:52:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5c8a5e4c-9ef4-4ab1-bb5e-af7053293511 to /host/opt/cni/bin/\\\\n2025-11-24T16:52:24Z [verbose] multus-daemon started\\\\n2025-11-24T16:52:24Z [verbose] Readiness Indicator file check\\\\n2025-11-24T16:53:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.413848 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c17685b6f60ddfe6df8a9e2101c8261b61a004fe711173e41d189807791444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6511b48fa7f312490f11e2e4851845631f91f5e9b75073f49f938f237c9b0aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wdz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:23Z is after 2025-08-24T17:21:41Z" Nov 24 
16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.429642 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.441554 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.461617 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:55Z\\\",\\\"message\\\":\\\"/factory.go:140\\\\nI1124 16:52:55.648744 6503 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 16:52:55.648836 6503 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 16:52:55.648865 6503 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 16:52:55.648838 6503 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 16:52:55.649226 6503 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 16:52:55.649242 6503 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 16:52:55.649289 6503 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 16:52:55.649305 6503 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 16:52:55.649321 6503 factory.go:656] Stopping watch factory\\\\nI1124 16:52:55.649336 6503 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 16:52:55.649428 6503 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:23Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.498599 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.498637 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.498646 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.498663 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.498676 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:23Z","lastTransitionTime":"2025-11-24T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.580507 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:23 crc kubenswrapper[4768]: E1124 16:53:23.580710 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.600512 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.600564 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.600585 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.600605 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.600620 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:23Z","lastTransitionTime":"2025-11-24T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.703284 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.703316 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.703325 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.703340 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.703368 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:23Z","lastTransitionTime":"2025-11-24T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.806163 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.806209 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.806221 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.806240 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.806255 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:23Z","lastTransitionTime":"2025-11-24T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.908784 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.908834 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.908850 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.908875 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:23 crc kubenswrapper[4768]: I1124 16:53:23.908890 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:23Z","lastTransitionTime":"2025-11-24T16:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.011978 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.012014 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.012024 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.012038 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.012048 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:24Z","lastTransitionTime":"2025-11-24T16:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.115518 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.115836 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.115925 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.116042 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.116145 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:24Z","lastTransitionTime":"2025-11-24T16:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.118799 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-98lk9_17a83d5e-e5e7-422d-ab0e-647ca2eefb37/ovnkube-controller/3.log" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.119724 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-98lk9_17a83d5e-e5e7-422d-ab0e-647ca2eefb37/ovnkube-controller/2.log" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.122589 4768 generic.go:334] "Generic (PLEG): container finished" podID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerID="10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07" exitCode=1 Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.122741 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerDied","Data":"10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07"} Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.122901 4768 scope.go:117] "RemoveContainer" containerID="23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.123735 4768 scope.go:117] "RemoveContainer" containerID="10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07" Nov 24 16:53:24 crc kubenswrapper[4768]: E1124 16:53:24.124006 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-98lk9_openshift-ovn-kubernetes(17a83d5e-e5e7-422d-ab0e-647ca2eefb37)\"" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.142452 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.171604 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23e3df12b264d3990ebb888ba0eee27d77e0f09dd59a38b001f67be0234bdc3d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:52:55Z\\\",\\\"message\\\":\\\"/factory.go:140\\\\nI1124 16:52:55.648744 6503 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 16:52:55.648836 6503 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 16:52:55.648865 6503 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 16:52:55.648838 6503 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 16:52:55.649226 6503 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 16:52:55.649242 6503 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 16:52:55.649289 6503 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 16:52:55.649305 6503 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 16:52:55.649321 6503 factory.go:656] Stopping watch factory\\\\nI1124 16:52:55.649336 6503 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 16:52:55.649428 6503 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:53:23Z\\\",\\\"message\\\":\\\"16:53:23.524423 6859 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 16:53:23.524623 6859 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 16:53:23.524751 6859 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 16:53:23.525074 6859 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 16:53:23.525243 6859 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 16:53:23.525840 6859 factory.go:656] Stopping watch factory\\\\nI1124 16:53:23.558949 6859 
shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 16:53:23.558986 6859 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 16:53:23.559098 6859 ovnkube.go:599] Stopped ovnkube\\\\nI1124 16:53:23.559136 6859 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 16:53:23.559224 6859 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.185859 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.199006 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.213413 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.218505 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.218541 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.218551 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.218566 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.218577 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:24Z","lastTransitionTime":"2025-11-24T16:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.229153 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.240266 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.255791 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.268430 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda3d09a7bd3d717900b61eb723790a947fa41f5ba8158f846b732032850de11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.283260 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d23d704e-96c9-4e48-8f80-0761fb1d07e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cefe8fbd1321d8e391d341491eff1a583f56e4ef09d1ba71da4d8c84a826185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b0efb1b7f2aad144c24537d9304024680adc1946d26a91c03dcf4c59ac4dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e98467b337c1b1625211569f5df1ad40d100d3243c5358dc61c73327cf0af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.299443 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.314974 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.320265 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.320294 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.320305 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.320324 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.320335 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:24Z","lastTransitionTime":"2025-11-24T16:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.328453 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-275xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff18637c-91e0-4ea4-9f9a-53c5b0277927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-275xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.342010 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a0d5baf-1004-4b15-8490-d38e769be8ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cc8d1811c588c8c1f29240c5ecb01aa846858f1f56f9d6ee795d43da15aff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc31b12ace7a77709b3ff576b42a37e3e4d436562f5db7eebd81f9ae23b74ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc31b12ace7a77709b3ff576b42a37e3e4d436562f5db7eebd81f9ae23b74ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.355921 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.364611 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:53:24 crc kubenswrapper[4768]: E1124 16:53:24.364903 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.364860137 +0000 UTC m=+149.611828825 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.364988 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.365046 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:24 crc kubenswrapper[4768]: E1124 16:53:24.365127 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 16:53:24 crc kubenswrapper[4768]: E1124 16:53:24.365192 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 16:53:24 crc kubenswrapper[4768]: E1124 16:53:24.365235 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.365214747 +0000 UTC m=+149.612183435 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 16:53:24 crc kubenswrapper[4768]: E1124 16:53:24.365263 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.365250488 +0000 UTC m=+149.612219176 (durationBeforeRetry 1m4s). 
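[annotation] Every "Failed to update status" entry in this stretch dies on the same Post to https://127.0.0.1:9743 with a serving certificate whose NotAfter (2025-08-24T17:21:41Z) is months behind the node clock (2025-11-24). A standalone probe, not kubelet code, that reads the validity window the handshake keeps rejecting; only the loopback address is taken from the log:

```go
// Connects to the webhook endpoint named in the errors above and prints
// the served certificate's validity window. InsecureSkipVerify is needed
// precisely because the certificate is expired; we inspect it, not trust it.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // from the log's Post "https://127.0.0.1:9743/pod?timeout=10s"
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial %s: %v", addr, err)
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		log.Fatal("no peer certificate presented")
	}
	leaf, now := certs[0], time.Now()
	fmt.Printf("subject:   %s\n", leaf.Subject)
	fmt.Printf("notBefore: %s\n", leaf.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", leaf.NotAfter.Format(time.RFC3339))
	if now.After(leaf.NotAfter) {
		// Same condition the kubelet reports above:
		// "current time ... is after 2025-08-24T17:21:41Z"
		fmt.Printf("expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), leaf.NotAfter.UTC().Format(time.RFC3339))
	}
}
```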
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.371115 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.387019 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9a43420d4b39e1291af651377602da94003efadb5a395178d644b9333412e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:53:10Z\\\",\\\"message\\\":\\\"2025-11-24T16:52:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5c8a5e4c-9ef4-4ab1-bb5e-af7053293511\\\\n2025-11-24T16:52:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5c8a5e4c-9ef4-4ab1-bb5e-af7053293511 to /host/opt/cni/bin/\\\\n2025-11-24T16:52:24Z [verbose] multus-daemon started\\\\n2025-11-24T16:52:24Z [verbose] Readiness Indicator file check\\\\n2025-11-24T16:53:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.402744 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c17685b6f60ddfe6df8a9e2101c8261b61a004fe711173e41d189807791444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6511b48fa7f312490f11e2e4851845631f91f5e9b75073f49f938f237c9b0aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wdz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:24Z is after 2025-08-24T17:21:41Z" Nov 24 
16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.422743 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.422786 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.422795 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.422811 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.422823 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:24Z","lastTransitionTime":"2025-11-24T16:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.423930 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:24Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.466615 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.466772 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:24 crc kubenswrapper[4768]: E1124 16:53:24.466963 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 16:53:24 crc kubenswrapper[4768]: E1124 16:53:24.467030 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 16:53:24 crc kubenswrapper[4768]: E1124 16:53:24.467055 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:53:24 crc kubenswrapper[4768]: E1124 16:53:24.467122 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.467100635 +0000 UTC m=+149.714069323 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:53:24 crc kubenswrapper[4768]: E1124 16:53:24.466962 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 16:53:24 crc kubenswrapper[4768]: E1124 16:53:24.467199 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 16:53:24 crc kubenswrapper[4768]: E1124 16:53:24.467229 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:53:24 crc kubenswrapper[4768]: E1124 16:53:24.467395 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.467318532 +0000 UTC m=+149.714287350 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.526288 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.526376 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.526392 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.526417 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.526430 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:24Z","lastTransitionTime":"2025-11-24T16:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.580394 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.580456 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.580394 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:24 crc kubenswrapper[4768]: E1124 16:53:24.580594 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:24 crc kubenswrapper[4768]: E1124 16:53:24.580690 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:24 crc kubenswrapper[4768]: E1124 16:53:24.580788 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.630294 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.630400 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.630441 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.630468 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.630487 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:24Z","lastTransitionTime":"2025-11-24T16:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.733973 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.734043 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.734062 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.734090 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.734114 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:24Z","lastTransitionTime":"2025-11-24T16:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.837914 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.838024 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.838043 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.838073 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.838097 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:24Z","lastTransitionTime":"2025-11-24T16:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.940735 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.940821 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.940841 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.940870 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:24 crc kubenswrapper[4768]: I1124 16:53:24.940890 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:24Z","lastTransitionTime":"2025-11-24T16:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.043781 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.043834 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.043851 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.043877 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.043894 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:25Z","lastTransitionTime":"2025-11-24T16:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.130146 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-98lk9_17a83d5e-e5e7-422d-ab0e-647ca2eefb37/ovnkube-controller/3.log" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.135947 4768 scope.go:117] "RemoveContainer" containerID="10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07" Nov 24 16:53:25 crc kubenswrapper[4768]: E1124 16:53:25.136335 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-98lk9_openshift-ovn-kubernetes(17a83d5e-e5e7-422d-ab0e-647ca2eefb37)\"" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.146250 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.146307 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.146326 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.146374 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.146396 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:25Z","lastTransitionTime":"2025-11-24T16:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.151898 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d23d704e-96c9-4e48-8f80-0761fb1d07e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cefe8fbd1321d8e391d341491eff1a583f56e4ef09d1ba71da4d8c84a826185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b0efb1b7f2aad144c24537d9304024680adc1946d26a91c03dcf4c59ac4dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e98467b337c1b1625211569f5df1ad40d100d3243c5358dc61c73327cf0af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.164343 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.190807 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.209484 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-275xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff18637c-91e0-4ea4-9f9a-53c5b0277927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-275xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.228221 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a0d5baf-1004-4b15-8490-d38e769be8ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cc8d1811c588c8c1f29240c5ecb01aa846858f1f56f9d6ee795d43da15aff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc31b12ace7a77709b3ff576b42a37e3e4d436562f5db7eebd81f9ae23b74ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc31b12ace7a77709b3ff576b42a37e3e4d436562f5db7eebd81f9ae23b74ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.249704 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.249784 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.249815 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.249849 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.249875 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:25Z","lastTransitionTime":"2025-11-24T16:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.251202 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda3d09a7bd3d717900b61eb723790a947fa41f5ba8158f846b732032850de11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.271490 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.295329 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9a43420d4b39e1291af651377602da94003efadb5a395178d644b9333412e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:53:10Z\\\",\\\"message\\\":\\\"2025-11-24T16:52:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5c8a5e4c-9ef4-4ab1-bb5e-af7053293511\\\\n2025-11-24T16:52:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5c8a5e4c-9ef4-4ab1-bb5e-af7053293511 to /host/opt/cni/bin/\\\\n2025-11-24T16:52:24Z [verbose] multus-daemon started\\\\n2025-11-24T16:52:24Z [verbose] Readiness Indicator file check\\\\n2025-11-24T16:53:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.316601 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c17685b6f60ddfe6df8a9e2101c8261b61a004fe711173e41d189807791444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6511b48fa7f312490f11e2e4851845631f91f5e9b75073f49f938f237c9b0aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wdz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:25Z is after 2025-08-24T17:21:41Z" Nov 24 
16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.352382 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.352424 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.352435 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.352205 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8
b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.352454 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.352548 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:25Z","lastTransitionTime":"2025-11-24T16:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.373172 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.402281 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:53:23Z\\\",\\\"message\\\":\\\"16:53:23.524423 6859 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 16:53:23.524623 6859 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 16:53:23.524751 6859 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 16:53:23.525074 6859 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 16:53:23.525243 6859 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 16:53:23.525840 6859 factory.go:656] Stopping watch factory\\\\nI1124 16:53:23.558949 6859 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 16:53:23.558986 6859 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 16:53:23.559098 6859 ovnkube.go:599] Stopped ovnkube\\\\nI1124 16:53:23.559136 6859 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 16:53:23.559224 6859 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-98lk9_openshift-ovn-kubernetes(17a83d5e-e5e7-422d-ab0e-647ca2eefb37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.420716 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.433568 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.448582 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.456370 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.456439 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.456458 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.456484 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.456504 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:25Z","lastTransitionTime":"2025-11-24T16:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.470223 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.486902 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.505320 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.525619 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:25Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.559222 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.559279 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.559290 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.559306 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.559315 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:25Z","lastTransitionTime":"2025-11-24T16:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.580803 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:25 crc kubenswrapper[4768]: E1124 16:53:25.581181 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
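Every "Failed to update status for pod" entry above fails for the same reason: the admission webhook pod.network-node-identity.openshift.io serves a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-24T16:53:25Z. A minimal sketch for confirming the expiry against the endpoint named in the log (https://127.0.0.1:9743), assuming Python with the third-party cryptography package is available on the node; everything beyond the host and port is illustrative:

    import ssl
    from cryptography import x509

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint taken from the log

    # Fetch the serving certificate without verifying it; a verifying
    # handshake would fail exactly the way the kubelet's POST did.
    pem = ssl.get_server_certificate((HOST, PORT))
    cert = x509.load_pem_x509_certificate(pem.encode())

    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)  # expect 2025-08-24T17:21:41Z per the log
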
pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.662610 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.662671 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.662690 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.662714 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.662736 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:25Z","lastTransitionTime":"2025-11-24T16:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.765700 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.765764 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.765791 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.765822 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.765847 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:25Z","lastTransitionTime":"2025-11-24T16:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.868509 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.868555 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.868571 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.868595 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.868616 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:25Z","lastTransitionTime":"2025-11-24T16:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.971721 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.971780 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.971797 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.971822 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:25 crc kubenswrapper[4768]: I1124 16:53:25.971840 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:25Z","lastTransitionTime":"2025-11-24T16:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.075872 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.076248 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.076272 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.076299 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.076321 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:26Z","lastTransitionTime":"2025-11-24T16:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.179396 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.179447 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.179464 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.179485 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.179502 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:26Z","lastTransitionTime":"2025-11-24T16:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.283287 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.283411 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.283438 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.283488 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.283512 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:26Z","lastTransitionTime":"2025-11-24T16:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.388130 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.388198 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.388215 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.388244 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.388262 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:26Z","lastTransitionTime":"2025-11-24T16:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.491891 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.491981 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.492001 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.492033 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.492058 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:26Z","lastTransitionTime":"2025-11-24T16:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.580103 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.580166 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.580124 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:26 crc kubenswrapper[4768]: E1124 16:53:26.580410 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:26 crc kubenswrapper[4768]: E1124 16:53:26.580580 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:26 crc kubenswrapper[4768]: E1124 16:53:26.580809 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
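The NodeNotReady heartbeats repeat roughly every 100 ms in this capture because the kubelet still finds no CNI configuration in /etc/kubernetes/cni/net.d/, and the ovnkube-controller container that would provide the network is itself in CrashLoopBackOff behind the expired webhook certificate. A small sketch of the same check done by hand, assuming Python on the node; the directory comes from the log, while the extension list is an assumption based on common CNI conventions:

    from pathlib import Path

    # Directory named in the NetworkPluginNotReady message.
    NET_D = Path("/etc/kubernetes/cni/net.d")

    # CNI plugins conventionally drop .conf, .conflist, or .json files here.
    configs = sorted(
        p for p in NET_D.iterdir()
        if p.suffix in {".conf", ".conflist", ".json"}
    ) if NET_D.is_dir() else []

    if configs:
        for p in configs:
            print("found CNI config:", p)
    else:
        # Matches the log: the network provider has not written its
        # config yet, so the Ready condition stays False.
        print(f"no CNI configuration file in {NET_D}/")
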
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.597009 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.597099 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.597125 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.597161 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.597187 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:26Z","lastTransitionTime":"2025-11-24T16:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.700720 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.700769 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.700787 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.700814 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.700834 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:26Z","lastTransitionTime":"2025-11-24T16:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.803551 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.803632 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.803652 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.803682 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.803700 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:26Z","lastTransitionTime":"2025-11-24T16:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.906407 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.906464 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.906482 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.906511 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:26 crc kubenswrapper[4768]: I1124 16:53:26.906537 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:26Z","lastTransitionTime":"2025-11-24T16:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.009151 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.009217 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.009240 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.009268 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.009286 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:27Z","lastTransitionTime":"2025-11-24T16:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.113190 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.113253 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.113279 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.113310 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.113336 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:27Z","lastTransitionTime":"2025-11-24T16:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.216414 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.216489 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.216513 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.216550 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.216575 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:27Z","lastTransitionTime":"2025-11-24T16:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.319188 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.319243 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.319262 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.319287 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.319305 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:27Z","lastTransitionTime":"2025-11-24T16:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.422203 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.422276 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.422297 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.422327 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.422369 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:27Z","lastTransitionTime":"2025-11-24T16:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.524914 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.524975 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.524992 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.525019 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.525036 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:27Z","lastTransitionTime":"2025-11-24T16:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.580685 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:27 crc kubenswrapper[4768]: E1124 16:53:27.580986 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
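The multi-kilobyte payloads in the "failed to patch status" entries are ordinary JSON documents (the $setElementOrder keys mark them as strategic merge patches); inside the err="..." field each patch is embedded as a JSON string, so two passes of json.loads recover it. A sketch on a hypothetical miniature of one payload (the uid is the one logged for network-node-identity-vrzqb):

    import json

    # Miniature stand-in for a kubelet "failed to patch status" payload:
    # the patch document is a JSON string inside the logged error, so it
    # parses in two passes.
    raw = '"{\\"metadata\\":{\\"uid\\":\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\"}}"'

    patch = json.loads(json.loads(raw))  # pass 1 unwraps the string, pass 2 parses the patch
    print(patch["metadata"]["uid"])      # ef543e1b-8068-4ea3-b32a-61027b32e95d
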
pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.628565 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.628627 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.628645 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.628669 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.628687 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:27Z","lastTransitionTime":"2025-11-24T16:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.732434 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.732502 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.732521 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.732549 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.732568 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:27Z","lastTransitionTime":"2025-11-24T16:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[the same event/condition cycles repeat at 16:53:27.628, 16:53:27.732, 16:53:27.835, and 16:53:27.938]
Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.992855 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.992894 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.992904 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.992920 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:27 crc kubenswrapper[4768]: I1124 16:53:27.992932 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:27Z","lastTransitionTime":"2025-11-24T16:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
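Every one of these NotReady cycles traces back to a single message: no CNI configuration file in /etc/kubernetes/cni/net.d/. A quick way to see what the network-readiness check is waiting for is to look for config files in that directory. The sketch below is a diagnostic illustration, not kubelet code; it assumes the stock libcni convention of .conf/.conflist/.json files:

// Minimal sketch of the state behind the NetworkPluginNotReady message above:
// the CNI plugin stays unready until a network config appears in the conf dir.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // path taken from the log message
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", confDir, err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni loads
			fmt.Println("CNI config:", filepath.Join(confDir, e.Name()))
			found++
		}
	}
	if found == 0 {
		// This is the state the kubelet is reporting: the directory holds no
		// usable network configuration yet.
		fmt.Println("no CNI configuration file found; network provider not started?")
	}
}

On this node the network provider (OVN-Kubernetes under OpenShift) is expected to drop that file once it starts, which is why the kubelet keeps re-posting the same condition while it waits.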
Nov 24 16:53:28 crc kubenswrapper[4768]: E1124 16:53:28.004444 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:28Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.008002 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.008047 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.008066 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.008090 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.008107 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:28Z","lastTransitionTime":"2025-11-24T16:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:28 crc kubenswrapper[4768]: E1124 16:53:28.020227 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:28Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.024752 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.024813 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.024821 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.024837 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.024852 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:28Z","lastTransitionTime":"2025-11-24T16:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:28 crc kubenswrapper[4768]: E1124 16:53:28.044163 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:28Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.048461 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.048524 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.048543 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.048569 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.048596 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:28Z","lastTransitionTime":"2025-11-24T16:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:28 crc kubenswrapper[4768]: E1124 16:53:28.067143 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:28Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.071617 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.071675 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
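The status-patch retries above all fail for the same reason: the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a certificate that expired on 2025-08-24T17:21:41Z, months before the current clock of 2025-11-24. A small diagnostic sketch (run on the node; the flow is illustrative, not OpenShift tooling) that fetches the presented certificate and repeats the client's validity check:

// Diagnostic sketch for the x509 failure above: connect to the webhook
// endpoint from the log, skip verification so the handshake completes, and
// compare the presented leaf certificate's validity window against the
// current time -- the same NotBefore/NotAfter check that made the Post fail.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // endpoint from the webhook error above
	conn, err := tls.Dial("tcp", addr, &tls.Config{
		InsecureSkipVerify: true, // we only want to inspect the cert, not trust it
	})
	if err != nil {
		log.Fatalf("dial %s: %v", addr, err)
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		log.Fatal("no peer certificates presented")
	}
	leaf := certs[0]
	now := time.Now()
	fmt.Printf("subject:   %s\n", leaf.Subject)
	fmt.Printf("notBefore: %s\n", leaf.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", leaf.NotAfter.Format(time.RFC3339))
	if now.After(leaf.NotAfter) {
		// Matches the log: "certificate has expired or is not yet valid".
		fmt.Printf("EXPIRED: current time %s is after %s\n",
			now.Format(time.RFC3339), leaf.NotAfter.Format(time.RFC3339))
	}
}

Until that serving certificate is rotated, every node-status patch will keep bouncing off the webhook, so the kubelet loops through the retry sequence seen here.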
event="NodeHasNoDiskPressure" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.071688 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.071709 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.071725 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:28Z","lastTransitionTime":"2025-11-24T16:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:28 crc kubenswrapper[4768]: E1124 16:53:28.090902 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:28Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:28 crc kubenswrapper[4768]: E1124 16:53:28.091022 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.092682 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.092732 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.092745 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.092766 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.092778 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:28Z","lastTransitionTime":"2025-11-24T16:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.195398 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.195461 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.195483 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.195510 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.195534 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:28Z","lastTransitionTime":"2025-11-24T16:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.298169 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.298207 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.298216 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.298231 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.298239 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:28Z","lastTransitionTime":"2025-11-24T16:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.401576 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.401606 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.401614 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.401626 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.401635 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:28Z","lastTransitionTime":"2025-11-24T16:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.503735 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.503770 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.503779 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.503794 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.503805 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:28Z","lastTransitionTime":"2025-11-24T16:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.579960 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.580569 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.580588 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:28 crc kubenswrapper[4768]: E1124 16:53:28.580671 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:28 crc kubenswrapper[4768]: E1124 16:53:28.580865 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:28 crc kubenswrapper[4768]: E1124 16:53:28.580969 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.606605 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.606664 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.606681 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.606705 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.606723 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:28Z","lastTransitionTime":"2025-11-24T16:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.709586 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.709633 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.709645 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.709702 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.709717 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:28Z","lastTransitionTime":"2025-11-24T16:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.812892 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.813004 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.813032 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.813118 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.813150 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:28Z","lastTransitionTime":"2025-11-24T16:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.916303 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.916377 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.916394 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.916419 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:28 crc kubenswrapper[4768]: I1124 16:53:28.916436 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:28Z","lastTransitionTime":"2025-11-24T16:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.019315 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.019396 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.019414 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.019441 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.019460 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:29Z","lastTransitionTime":"2025-11-24T16:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.121995 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.122053 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.122063 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.122079 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.122089 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:29Z","lastTransitionTime":"2025-11-24T16:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.224692 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.224744 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.224754 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.224769 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.224781 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:29Z","lastTransitionTime":"2025-11-24T16:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.327801 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.327848 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.327859 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.327880 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.327896 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:29Z","lastTransitionTime":"2025-11-24T16:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.429840 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.429880 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.429889 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.429904 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.429913 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:29Z","lastTransitionTime":"2025-11-24T16:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.532240 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.532279 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.532287 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.532300 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.532309 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:29Z","lastTransitionTime":"2025-11-24T16:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.580324 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:29 crc kubenswrapper[4768]: E1124 16:53:29.580781 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.593766 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a0d5baf-1004-4b15-8490-d38e769be8ba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49cc8d1811c588c8c1f29240c5ecb01aa846858f1f56f9d6ee795d43da15aff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc31b12ace7a77709b3ff576b42a37e3e4d436562f5db7eebd81f9ae23b74ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc31b12ace7a77709b3ff576b42a37e3e4d436562f5db7eebd81f9ae23b74ac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:29Z is after 2025-08-24T17:21:41Z" Nov 24 
16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.608732 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e1c1911-5095-4001-b4ed-6e24bdc4494b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e7929643aac3ae4dc022b7e409c8f89c0f7ec08ae3e8d4bf5ecce5dc8a4e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c8d426cc55c53371cc58d515387105b24a657c0200b85f1e7a5e6a48d48039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be7807b8249668d62be792c1dedc8d1271a2c51c872706f2b050ebfc292f24b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda3d09a7bd3d717900b61eb723790a947fa41f5ba8158f846b732032850de11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://906e23b8d2ac4d7470c9e871479ea616f83e4ecb23ba3b12fb619ebf60552230\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"le observer\\\\nW1124 16:52:19.992798 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1124 16:52:19.992983 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 16:52:19.993691 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2910422761/tls.crt::/tmp/serving-cert-2910422761/tls.key\\\\\\\"\\\\nI1124 16:52:20.666633 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1124 16:52:20.672130 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1124 16:52:20.672147 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1124 16:52:20.672170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1124 16:52:20.672175 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1124 16:52:20.677175 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1124 16:52:20.677157 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1124 16:52:20.677206 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677210 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1124 16:52:20.677215 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1124 16:52:20.677219 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1124 16:52:20.677222 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1124 16:52:20.677224 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1124 16:52:20.679034 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ba54bdb26bac38ca69286c71d935503a3e35a50586e1c55d4f881fda0395cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a490736f553445d0c0f14ef9dba60b857efd3b9e1bd50e21b131aab82b5d4a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.621066 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d23d704e-96c9-4e48-8f80-0761fb1d07e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cefe8fbd1321d8e391d341491eff1a583f56e4ef09d1ba71da4d8c84a826185\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78b0efb1b7f2aad144c24537d9304024680adc1946d26a91c03dcf4c59ac4dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8e98467b337c1b1625211569f5df1ad40d100d3243c5358dc61c73327cf0af2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8555d94a92140187102eee6a7792882d49b3552c9f37d51e511a6e20faa9657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.634234 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.635232 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.635286 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.635299 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.635319 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.635332 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:29Z","lastTransitionTime":"2025-11-24T16:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.657474 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c5817b-8ca1-4d97-8a2f-0ffc8e9a1006\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13bc2be869a4a801d6f03a59dc56347354fcd865bd8cbd5a4a74f7b4c38b42db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd880192ff92c54374d065e354c82e195bb88b7b47e6eb6aa9934c0fa87c07dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db5dfa44d7525fe125a601eeebf90efbcf6fe52971621cb5124f010b1c7f051\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a296f92c39e7f02edf94a866a1c3c3dcb7f6727ce2ab34d66c4612d37e7a31ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://617af95160f9aedc3e738aeea2e8fb41db12d08164d24f77e5623f38a94e95a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f56afb1e80543b16246244f51a1ad6a36a753140375b8ad6225b197ab3218f11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fff3d18edc0e0bc76ea836bb42e4a7fcca1547da0f1f671b98b2ad643d4df3ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqgl7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c6hmx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.673044 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-275xl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff18637c-91e0-4ea4-9f9a-53c5b0277927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-69lz7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-275xl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.703643 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f4b2974-3ee3-4560-9962-a93a8e5dceea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c86d264d70d4402efad9e9876824ba47f1eb05bfe26421d1a30e0d7f3fa30ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c0c54649f0eed44bdfda31c61d110b2067475ff24babaea708380763ee5e7ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c5bae62222b85ee6bdc550960772da710e3581d0bc58e9a44b9b243bbab4cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17140e48e1e363af060073cce89fc9f803547fc
639f9040ed6c080460b24b248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9250cd531e2b9f2868005b8d05cd861e17d17e0e8c42d47ec3b1121792cee74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97b2ccf7a67efb25819dc1200d86aedd2313a4582b5b376c9241267c573d5515\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c527facf45786f7bc14eee27ba1b1afd96b203e020d851fa707e1e50626c68d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f7ff4a87ecd2ff2e9e37040d0cc870eea58fbebc718f1fe3a4fa58da7a53dac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.722394 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dec0e32881a33b3176a74f8f5f781d2a2f36238a9d15ae01821cdd9b8ab85daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.738147 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.738202 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.738215 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.738237 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.738252 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:29Z","lastTransitionTime":"2025-11-24T16:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.738751 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.755627 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-k8vfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9a43420d4b39e1291af651377602da94003efadb5a395178d644b9333412e35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:53:10Z\\\",\\\"message\\\":\\\"2025-11-24T16:52:24+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5c8a5e4c-9ef4-4ab1-bb5e-af7053293511\\\\n2025-11-24T16:52:24+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5c8a5e4c-9ef4-4ab1-bb5e-af7053293511 to /host/opt/cni/bin/\\\\n2025-11-24T16:52:24Z [verbose] multus-daemon started\\\\n2025-11-24T16:52:24Z [verbose] Readiness Indicator file check\\\\n2025-11-24T16:53:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f4twm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-k8vfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.772660 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d5c3c1e-571c-4b97-8d0c-63a6c0c126d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c17685b6f60ddfe6df8a9e2101c8261b61a004fe711173e41d189807791444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6511b48fa7f312490f11e2e4851845631f91f5e9b75073f49f938f237c9b0aa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cdhwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9wdz4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:29Z is after 2025-08-24T17:21:41Z" Nov 24 
16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.786232 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4464abf61e377bb472235b10ec7ca96fb5d0c2797db922595d389bfc6e10ecbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e933e6f4eb6dd334c8fe1baafac3ad2e47e53bb9e5376f672cf889e8bbbd3ab9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.805693 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wlblb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ae5decc-7de7-41db-9adf-b5551322c43a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://158b86276d2d54b6ecc7c6afd5ad032e74047a4b7b2f63d28797277f85509a7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mgwp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wlblb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.834289 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T16:53:23Z\\\",\\\"message\\\":\\\"16:53:23.524423 6859 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 16:53:23.524623 6859 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 16:53:23.524751 6859 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 16:53:23.525074 6859 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 16:53:23.525243 6859 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 16:53:23.525840 6859 factory.go:656] Stopping watch factory\\\\nI1124 16:53:23.558949 6859 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1124 16:53:23.558986 6859 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1124 16:53:23.559098 6859 ovnkube.go:599] Stopped ovnkube\\\\nI1124 16:53:23.559136 6859 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1124 16:53:23.559224 6859 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T16:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-98lk9_openshift-ovn-kubernetes(17a83d5e-e5e7-422d-ab0e-647ca2eefb37)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fdzd7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-98lk9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.840235 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.840277 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.840286 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.840302 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.840313 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:29Z","lastTransitionTime":"2025-11-24T16:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.849400 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfca098-b23b-4919-846e-a2bec70c3194\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:51:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a84233770bc480bffafba4cbc89f1b4565d8b3344f3ed13a70ded607f3a15e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a999a0120b8a06909b66b0dbec840b9da545c4f3bbd456fd1b43813424399275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aa208656bca26bcfb6fed52d2e95f4215a696a13caac18fa7116d7283626e5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82ac46bcc1ce29743ecf0729614e8acf719c71cc324c14f2e5a60a5b709fb38f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:51:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.862711 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.876645 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db68f33f82099eb12c707857b0900c65ea0b14796b3e819935cf1a5d267c131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.888149 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"517d8128-bef5-40a3-a786-5010780c2a58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f75c6ecabfbb108102e29fc2123b006e2b00e722e312fa051b238e54a0a129ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ts5s5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jf255\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.898917 4768 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-ql7kf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"630572ea-dec9-406a-9cca-da5ad59952b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T16:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa62b78c6319c410fe9f6e87f386725b55358094219c390953cc8e6d08dcdf77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T16:52:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqmp5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T16:52:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ql7kf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:29Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.942121 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.942176 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.942193 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.942216 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:29 crc kubenswrapper[4768]: I1124 16:53:29.942233 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:29Z","lastTransitionTime":"2025-11-24T16:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.044129 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.044185 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.044200 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.044218 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.044229 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:30Z","lastTransitionTime":"2025-11-24T16:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.146122 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.146188 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.146201 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.146222 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.146237 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:30Z","lastTransitionTime":"2025-11-24T16:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.248430 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.248481 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.248492 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.248512 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.248524 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:30Z","lastTransitionTime":"2025-11-24T16:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.350774 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.350834 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.350847 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.350864 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.350876 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:30Z","lastTransitionTime":"2025-11-24T16:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.453697 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.453749 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.453760 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.453777 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.453791 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:30Z","lastTransitionTime":"2025-11-24T16:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.556467 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.556540 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.556558 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.556586 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.556607 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:30Z","lastTransitionTime":"2025-11-24T16:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.579777 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.579883 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.579930 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:30 crc kubenswrapper[4768]: E1124 16:53:30.580013 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:30 crc kubenswrapper[4768]: E1124 16:53:30.580087 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:30 crc kubenswrapper[4768]: E1124 16:53:30.580187 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.659572 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.659617 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.659627 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.659642 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.659654 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:30Z","lastTransitionTime":"2025-11-24T16:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.762763 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.762867 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.762878 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.762896 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.762908 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:30Z","lastTransitionTime":"2025-11-24T16:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.866429 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.866488 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.866504 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.866529 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.866547 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:30Z","lastTransitionTime":"2025-11-24T16:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.969312 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.969382 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.969397 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.969415 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:30 crc kubenswrapper[4768]: I1124 16:53:30.969429 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:30Z","lastTransitionTime":"2025-11-24T16:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.072066 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.072130 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.072146 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.072171 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.072189 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:31Z","lastTransitionTime":"2025-11-24T16:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
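
[editor's note] The NodeNotReady heartbeats repeat roughly every 100 ms and always cite the same reason: no CNI configuration file in /etc/kubernetes/cni/net.d/, i.e. the network plugin has not written its config yet. A minimal sketch of the check the container runtime is effectively performing — looking for at least one network config in that directory (the path is from the log; the extension list follows common CNI conventions):

    # Sketch of the readiness check behind "no CNI configuration file":
    # look for a network config in the CNI conf dir named in the log.
    from pathlib import Path

    CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")

    configs = []
    if CNI_CONF_DIR.is_dir():
        configs = sorted(
            p for p in CNI_CONF_DIR.iterdir()
            if p.suffix in {".conf", ".conflist", ".json"}
        )
    if configs:
        print("CNI configs found:", *[p.name for p in configs])
    else:
        # The state the kubelet keeps reporting: NetworkReady=false.
        print(f"no CNI configuration file in {CNI_CONF_DIR}/")
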
Has your network provider started?"} Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.175886 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.175957 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.175980 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.176011 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.176056 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:31Z","lastTransitionTime":"2025-11-24T16:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.278344 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.278410 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.278421 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.278442 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.278454 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:31Z","lastTransitionTime":"2025-11-24T16:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.381541 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.381911 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.382116 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.382389 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.382562 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:31Z","lastTransitionTime":"2025-11-24T16:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.486166 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.486223 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.486244 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.486270 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.486289 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:31Z","lastTransitionTime":"2025-11-24T16:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.580865 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:31 crc kubenswrapper[4768]: E1124 16:53:31.581093 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.588285 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.588319 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.588330 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.588370 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.588383 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:31Z","lastTransitionTime":"2025-11-24T16:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.691971 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.692043 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.692060 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.692114 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.692131 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:31Z","lastTransitionTime":"2025-11-24T16:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.794433 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.794520 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.794539 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.794562 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.794579 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:31Z","lastTransitionTime":"2025-11-24T16:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.897596 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.897742 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.897768 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.897797 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:31 crc kubenswrapper[4768]: I1124 16:53:31.897821 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:31Z","lastTransitionTime":"2025-11-24T16:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.000733 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.000823 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.000862 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.000898 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.000923 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:32Z","lastTransitionTime":"2025-11-24T16:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.103575 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.103638 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.103655 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.103677 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.103694 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:32Z","lastTransitionTime":"2025-11-24T16:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.207098 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.207175 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.207193 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.207218 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.207235 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:32Z","lastTransitionTime":"2025-11-24T16:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.310083 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.310153 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.310171 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.310195 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.310211 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:32Z","lastTransitionTime":"2025-11-24T16:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.413186 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.413269 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.413294 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.413325 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.413383 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:32Z","lastTransitionTime":"2025-11-24T16:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.515697 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.515755 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.515776 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.515798 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.515816 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:32Z","lastTransitionTime":"2025-11-24T16:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
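
[editor's note] Each "Node became not ready" entry carries the full Ready condition as inline JSON (condition={"type":"Ready",...}), so the transition reason and timestamps can be pulled straight out of the journal. A sketch of extracting and decoding that payload; the sample line is abridged from this log:

    # Pull the inline Ready-condition JSON out of a kubelet journal line
    # like the ones above. The sample line is abridged from this log.
    import json
    import re

    line = ('I1124 16:53:33.548601 4768 setters.go:603] "Node became not ready" '
            'node="crc" condition={"type":"Ready","status":"False",'
            '"lastHeartbeatTime":"2025-11-24T16:53:33Z",'
            '"lastTransitionTime":"2025-11-24T16:53:33Z",'
            '"reason":"KubeletNotReady",'
            '"message":"container runtime network not ready"}')

    match = re.search(r'condition=(\{.*\})', line)
    if match:
        cond = json.loads(match.group(1))
        print(cond["reason"], "at", cond["lastTransitionTime"])
        # -> KubeletNotReady at 2025-11-24T16:53:33Z
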
Has your network provider started?"} Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.580714 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.580779 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.580882 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:32 crc kubenswrapper[4768]: E1124 16:53:32.580888 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:32 crc kubenswrapper[4768]: E1124 16:53:32.581046 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:32 crc kubenswrapper[4768]: E1124 16:53:32.581157 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.618630 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.618700 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.618717 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.618739 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.618756 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:32Z","lastTransitionTime":"2025-11-24T16:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.721973 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.722044 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.722063 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.722095 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.722113 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:32Z","lastTransitionTime":"2025-11-24T16:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.824808 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.824846 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.824857 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.824873 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.824883 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:32Z","lastTransitionTime":"2025-11-24T16:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.927639 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.927703 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.927714 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.927731 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:32 crc kubenswrapper[4768]: I1124 16:53:32.927741 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:32Z","lastTransitionTime":"2025-11-24T16:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.030079 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.030137 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.030146 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.030179 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.030195 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:33Z","lastTransitionTime":"2025-11-24T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.133153 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.133321 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.133390 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.133426 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.133448 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:33Z","lastTransitionTime":"2025-11-24T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.236339 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.236439 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.236463 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.236493 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.236515 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:33Z","lastTransitionTime":"2025-11-24T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.341227 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.341449 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.341541 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.341621 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.341686 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:33Z","lastTransitionTime":"2025-11-24T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.445568 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.445610 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.445622 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.445641 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.445656 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:33Z","lastTransitionTime":"2025-11-24T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.548482 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.548554 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.548568 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.548587 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.548601 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:33Z","lastTransitionTime":"2025-11-24T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.580451 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl"
Nov 24 16:53:33 crc kubenswrapper[4768]: E1124 16:53:33.580653 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.651100 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.651160 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.651171 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.651192 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.651209 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:33Z","lastTransitionTime":"2025-11-24T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.754936 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.755020 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.755044 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.755077 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.755178 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:33Z","lastTransitionTime":"2025-11-24T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
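
Every entry above repeats the same diagnosis: the kubelet marks the node NotReady because the container runtime (CRI-O on this CRC node) reports NetworkReady=false, and it reports that because nothing has yet written a CNI config into /etc/kubernetes/cni/net.d/; pods that need a network sandbox, such as network-metrics-daemon-275xl, therefore cannot be synced. A minimal Go sketch of that readiness test, simplified from what libcni-based runtimes do (the function and file names are illustrative, not the actual CRI-O code):

    // cnicheck.go - a minimal sketch, NOT the actual kubelet/CRI-O code, of the
    // check behind "no CNI configuration file in /etc/kubernetes/cni/net.d/":
    // the runtime scans the conf dir for a usable CNI network config.
    package main

    import (
            "fmt"
            "os"
            "path/filepath"
    )

    // networkReady reports whether confDir holds any candidate CNI config file.
    // libcni-based runtimes accept .conf, .conflist and .json extensions.
    func networkReady(confDir string) (bool, string) {
            entries, err := os.ReadDir(confDir)
            if err != nil {
                    return false, fmt.Sprintf("cannot read %s: %v", confDir, err)
            }
            for _, e := range entries {
                    switch filepath.Ext(e.Name()) {
                    case ".conf", ".conflist", ".json":
                            return true, "found " + e.Name()
                    }
            }
            return false, "no CNI configuration file in " + confDir
    }

    func main() {
            ready, detail := networkReady("/etc/kubernetes/cni/net.d")
            // While the directory is empty this prints NetworkReady=false,
            // which is exactly the condition the kubelet keeps re-recording above.
            fmt.Printf("NetworkReady=%v (%s)\n", ready, detail)
    }
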
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.859069 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.859138 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.859159 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.859187 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.859204 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:33Z","lastTransitionTime":"2025-11-24T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.962691 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.962740 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.962754 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.962778 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:33 crc kubenswrapper[4768]: I1124 16:53:33.962793 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:33Z","lastTransitionTime":"2025-11-24T16:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.065900 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.065969 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.065986 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.066016 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.066035 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:34Z","lastTransitionTime":"2025-11-24T16:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.168531 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.168597 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.168614 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.168640 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.168661 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:34Z","lastTransitionTime":"2025-11-24T16:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.271872 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.271924 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.271934 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.271974 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.271985 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:34Z","lastTransitionTime":"2025-11-24T16:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.374307 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.374372 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.374393 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.374415 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.374426 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:34Z","lastTransitionTime":"2025-11-24T16:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.476739 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.476776 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.476787 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.476801 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.476810 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:34Z","lastTransitionTime":"2025-11-24T16:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.579786 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.579831 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.579841 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.579857 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.579859 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.579860 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 16:53:34 crc kubenswrapper[4768]: E1124 16:53:34.579963 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.579877 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:34Z","lastTransitionTime":"2025-11-24T16:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.579874 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 16:53:34 crc kubenswrapper[4768]: E1124 16:53:34.580255 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 16:53:34 crc kubenswrapper[4768]: E1124 16:53:34.580389 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.683152 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.683225 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.683246 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.683271 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.683288 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:34Z","lastTransitionTime":"2025-11-24T16:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.787183 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.787255 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.787279 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.787310 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.787332 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:34Z","lastTransitionTime":"2025-11-24T16:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.890755 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.891093 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.891327 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.891651 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.891847 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:34Z","lastTransitionTime":"2025-11-24T16:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.994863 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.994932 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.994968 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.994998 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:34 crc kubenswrapper[4768]: I1124 16:53:34.995024 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:34Z","lastTransitionTime":"2025-11-24T16:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.098755 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.098819 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.098838 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.098862 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.098879 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:35Z","lastTransitionTime":"2025-11-24T16:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.201965 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.202021 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.202043 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.202069 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.202086 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:35Z","lastTransitionTime":"2025-11-24T16:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.304989 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.305046 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.305067 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.305093 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.305114 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:35Z","lastTransitionTime":"2025-11-24T16:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.407860 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.407902 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.407910 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.407926 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.407935 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:35Z","lastTransitionTime":"2025-11-24T16:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.510701 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.510762 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.510780 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.510808 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.510826 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:35Z","lastTransitionTime":"2025-11-24T16:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.580438 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl"
Nov 24 16:53:35 crc kubenswrapper[4768]: E1124 16:53:35.580621 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.613879 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.613929 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.613939 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.613957 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.613985 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:35Z","lastTransitionTime":"2025-11-24T16:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.716611 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.716663 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.716679 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.716705 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.716722 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:35Z","lastTransitionTime":"2025-11-24T16:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.818569 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.818622 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.818639 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.818662 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.818679 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:35Z","lastTransitionTime":"2025-11-24T16:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.926030 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.926127 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.926209 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.926244 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:35 crc kubenswrapper[4768]: I1124 16:53:35.926279 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:35Z","lastTransitionTime":"2025-11-24T16:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.029521 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.029710 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.029990 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.030030 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.030044 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:36Z","lastTransitionTime":"2025-11-24T16:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.132843 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.132909 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.132928 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.132953 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.132971 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:36Z","lastTransitionTime":"2025-11-24T16:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.236126 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.236179 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.236192 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.236210 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.236223 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:36Z","lastTransitionTime":"2025-11-24T16:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.339672 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.339753 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.339767 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.339792 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.339806 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:36Z","lastTransitionTime":"2025-11-24T16:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.443024 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.443136 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.443167 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.443203 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.443229 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:36Z","lastTransitionTime":"2025-11-24T16:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.546873 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.546933 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.546946 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.546968 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.546981 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:36Z","lastTransitionTime":"2025-11-24T16:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.580545 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.580636 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.580586 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 16:53:36 crc kubenswrapper[4768]: E1124 16:53:36.580773 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 16:53:36 crc kubenswrapper[4768]: E1124 16:53:36.581151 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 16:53:36 crc kubenswrapper[4768]: E1124 16:53:36.581232 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.650459 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.650532 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.650551 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.650579 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.650599 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:36Z","lastTransitionTime":"2025-11-24T16:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.754034 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.754099 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.754117 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.754142 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.754161 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:36Z","lastTransitionTime":"2025-11-24T16:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.857867 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.857940 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.857957 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.857983 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.858000 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:36Z","lastTransitionTime":"2025-11-24T16:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.961172 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.961259 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.961277 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.961307 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:36 crc kubenswrapper[4768]: I1124 16:53:36.961327 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:36Z","lastTransitionTime":"2025-11-24T16:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.064079 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.064135 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.064147 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.064164 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.064174 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:37Z","lastTransitionTime":"2025-11-24T16:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.167584 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.167641 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.167650 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.167666 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.167680 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:37Z","lastTransitionTime":"2025-11-24T16:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.270916 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.270976 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.270990 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.271010 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.271023 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:37Z","lastTransitionTime":"2025-11-24T16:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.374072 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.374118 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.374126 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.374145 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.374159 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:37Z","lastTransitionTime":"2025-11-24T16:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.477576 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.477644 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.477670 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.477702 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.477723 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:37Z","lastTransitionTime":"2025-11-24T16:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.579863 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl"
Nov 24 16:53:37 crc kubenswrapper[4768]: E1124 16:53:37.580090 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.581160 4768 scope.go:117] "RemoveContainer" containerID="10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.581286 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.581418 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.581440 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.581465 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:37 crc kubenswrapper[4768]: E1124 16:53:37.581463 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-98lk9_openshift-ovn-kubernetes(17a83d5e-e5e7-422d-ab0e-647ca2eefb37)\"" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.581484 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:37Z","lastTransitionTime":"2025-11-24T16:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.683899 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.683964 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.683976 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.683995 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.684008 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:37Z","lastTransitionTime":"2025-11-24T16:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.786742 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.786783 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.786792 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.786808 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.786817 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:37Z","lastTransitionTime":"2025-11-24T16:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.890747 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.890788 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.890797 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.890812 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.890821 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:37Z","lastTransitionTime":"2025-11-24T16:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.993454 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.993500 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.993511 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.993529 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:37 crc kubenswrapper[4768]: I1124 16:53:37.993542 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:37Z","lastTransitionTime":"2025-11-24T16:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.096786 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.096861 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.096889 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.096919 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.096940 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:38Z","lastTransitionTime":"2025-11-24T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.199491 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.199543 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.199554 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.199574 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.199588 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:38Z","lastTransitionTime":"2025-11-24T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.302164 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.302218 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.302229 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.302247 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.302261 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:38Z","lastTransitionTime":"2025-11-24T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.405330 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.405429 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.405446 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.405471 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.405489 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:38Z","lastTransitionTime":"2025-11-24T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.483194 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.483258 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.483275 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.483298 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.483319 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:38Z","lastTransitionTime":"2025-11-24T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:38 crc kubenswrapper[4768]: E1124 16:53:38.499309 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:38Z is after 2025-08-24T17:21:41Z"
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.504460 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.504511 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
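The status patch above is rejected because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z, roughly three months before the clock time in this log. A small sketch that pulls both timestamps out of such a line and reports the gap (the regex is keyed to the x509 error text above and is illustrative only):

```python
# Sketch: extract "current time X is after Y" from the x509 error in the
# kubelet journal line above and report how long the webhook certificate
# has been expired.
import re
from datetime import datetime

line = ('tls: failed to verify certificate: x509: certificate has expired or is '
        'not yet valid: current time 2025-11-24T16:53:38Z is after 2025-08-24T17:21:41Z')

m = re.search(r'current time (\S+) is after (\S+)', line)
now, not_after = (datetime.strptime(t, '%Y-%m-%dT%H:%M:%SZ') for t in m.groups())
print(f'certificate expired {now - not_after} ago')  # 91 days, 23:31:57
```

A 91-day gap is consistent with a cluster image left powered off past its certificate lifetime; none of the kubelet's retries can succeed until that certificate is rotated.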
event="NodeHasNoDiskPressure" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.504528 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.504551 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.504569 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:38Z","lastTransitionTime":"2025-11-24T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:38 crc kubenswrapper[4768]: E1124 16:53:38.522472 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:38Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.526747 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.526810 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
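Each "Error updating node status, will retry" entry embeds the entire node-status patch as a JSON string that is escaped twice by the time it reaches the journal. A rough sketch of recovering it for inspection, assuming the escaping is uniform as in the entries above (extract_patch is an illustrative helper, not a kubelet API):

```python
# Sketch: recover the node-status patch embedded in an "Error updating node
# status" journal entry. The payload sits between 'failed to patch status \"'
# and '\" for node' and is unescaped twice (\\\" -> \" -> ") before parsing.
import json

def extract_patch(journal_line: str) -> dict:
    inner = journal_line.split('failed to patch status \\"', 1)[1]
    inner = inner.split('\\" for node', 1)[0]
    for _ in range(2):
        inner = inner.encode().decode('unicode_escape')
    return json.loads(inner)

# Usage: patch = extract_patch(line)
# patch['status']['conditions'][-1]['reason']  -> 'KubeletNotReady'
```

From the recovered patch, capacity minus allocatable (cpu "12" vs "11800m", memory "32865352Ki" vs "32404552Ki") shows roughly 200m of CPU and 450Mi of memory withheld from scheduling as system reservation.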
event="NodeHasNoDiskPressure" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.526822 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.526839 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.526849 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:38Z","lastTransitionTime":"2025-11-24T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:38 crc kubenswrapper[4768]: E1124 16:53:38.538683 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:38Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.542668 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.542722 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
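The Ready=False condition keeps repeating because /etc/kubernetes/cni/net.d/ holds no CNI configuration, and the component that would write one, ovnkube-controller, is itself in CrashLoopBackOff. A quick on-node check (the directory path comes from the log message; the extensions checked are an assumption about what the runtime accepts):

```python
# Sketch: list CNI network configs in the directory named by the kubelet
# error above. An empty result matches the NetworkPluginNotReady condition.
from pathlib import Path

CNI_DIR = Path("/etc/kubernetes/cni/net.d")

def cni_configs(d: Path = CNI_DIR) -> list[str]:
    if not d.is_dir():
        return []
    # .conf and .conflist are the usual CNI config extensions (assumed here).
    return sorted(p.name for p in d.iterdir() if p.suffix in {".conf", ".conflist"})

print(cni_configs() or "no CNI configuration files - node stays NotReady")
```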
event="NodeHasNoDiskPressure" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.542732 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.542748 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.542760 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:38Z","lastTransitionTime":"2025-11-24T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:38 crc kubenswrapper[4768]: E1124 16:53:38.557612 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:38Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.562552 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.562614 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.562632 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.562659 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.562678 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:38Z","lastTransitionTime":"2025-11-24T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:38 crc kubenswrapper[4768]: E1124 16:53:38.576401 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T16:53:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"397c5980-9223-44c8-a77d-6f192e744f3c\\\",\\\"systemUUID\\\":\\\"7d12c74d-4c3d-45cf-9517-ea4f468abd63\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T16:53:38Z is after 2025-08-24T17:21:41Z" Nov 24 16:53:38 crc kubenswrapper[4768]: E1124 16:53:38.576616 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.578204 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.578264 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.578286 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.578515 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.578533 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:38Z","lastTransitionTime":"2025-11-24T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.580731 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.580852 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.581487 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:38 crc kubenswrapper[4768]: E1124 16:53:38.581718 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:38 crc kubenswrapper[4768]: E1124 16:53:38.582096 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:38 crc kubenswrapper[4768]: E1124 16:53:38.582227 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.681506 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.681570 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.681596 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.681628 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.681648 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:38Z","lastTransitionTime":"2025-11-24T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.784327 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.784396 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.784410 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.784426 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.784438 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:38Z","lastTransitionTime":"2025-11-24T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.887421 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.887468 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.887481 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.887498 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.887511 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:38Z","lastTransitionTime":"2025-11-24T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.990496 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.990561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.990579 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.990605 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:38 crc kubenswrapper[4768]: I1124 16:53:38.990624 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:38Z","lastTransitionTime":"2025-11-24T16:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.094415 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.094492 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.094510 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.094536 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.094553 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:39Z","lastTransitionTime":"2025-11-24T16:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.197267 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.197322 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.197333 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.197359 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.197368 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:39Z","lastTransitionTime":"2025-11-24T16:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.300299 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.300393 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.300413 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.300441 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.300461 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:39Z","lastTransitionTime":"2025-11-24T16:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.403499 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.403537 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.403547 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.403561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.403572 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:39Z","lastTransitionTime":"2025-11-24T16:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.507048 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.507133 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.507154 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.507185 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.507206 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:39Z","lastTransitionTime":"2025-11-24T16:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.580545 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:39 crc kubenswrapper[4768]: E1124 16:53:39.580825 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.611955 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.612066 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.612086 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.612115 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.612136 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:39Z","lastTransitionTime":"2025-11-24T16:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.618291 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-c6hmx" podStartSLOduration=78.618270373 podStartE2EDuration="1m18.618270373s" podCreationTimestamp="2025-11-24 16:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:53:39.618212232 +0000 UTC m=+100.865180930" watchObservedRunningTime="2025-11-24 16:53:39.618270373 +0000 UTC m=+100.865239051" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.654779 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=20.654754417 podStartE2EDuration="20.654754417s" podCreationTimestamp="2025-11-24 16:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:53:39.653732267 +0000 UTC m=+100.900700935" watchObservedRunningTime="2025-11-24 16:53:39.654754417 +0000 UTC m=+100.901723095" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.694820 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=45.694792945 podStartE2EDuration="45.694792945s" podCreationTimestamp="2025-11-24 16:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:53:39.693295041 +0000 UTC m=+100.940263709" watchObservedRunningTime="2025-11-24 16:53:39.694792945 +0000 UTC m=+100.941761613" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.695132 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.695097184 podStartE2EDuration="1m18.695097184s" podCreationTimestamp="2025-11-24 16:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:53:39.672490439 +0000 UTC m=+100.919459177" watchObservedRunningTime="2025-11-24 16:53:39.695097184 +0000 UTC m=+100.942065862" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.715393 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.715449 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.715466 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.715488 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.715506 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:39Z","lastTransitionTime":"2025-11-24T16:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.731537 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9wdz4" podStartSLOduration=77.731513035 podStartE2EDuration="1m17.731513035s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:53:39.731157555 +0000 UTC m=+100.978126233" watchObservedRunningTime="2025-11-24 16:53:39.731513035 +0000 UTC m=+100.978481703" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.756984 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=77.756968704 podStartE2EDuration="1m17.756968704s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:53:39.756547582 +0000 UTC m=+101.003516290" watchObservedRunningTime="2025-11-24 16:53:39.756968704 +0000 UTC m=+101.003937362" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.799461 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-k8vfj" podStartSLOduration=78.799447374 podStartE2EDuration="1m18.799447374s" podCreationTimestamp="2025-11-24 16:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:53:39.798651651 +0000 UTC m=+101.045620349" watchObservedRunningTime="2025-11-24 16:53:39.799447374 +0000 UTC m=+101.046416022" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.817774 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.817816 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.817827 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.817843 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.817855 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:39Z","lastTransitionTime":"2025-11-24T16:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.831671 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-wlblb" podStartSLOduration=78.831651042 podStartE2EDuration="1m18.831651042s" podCreationTimestamp="2025-11-24 16:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:53:39.831211059 +0000 UTC m=+101.078179737" watchObservedRunningTime="2025-11-24 16:53:39.831651042 +0000 UTC m=+101.078619700" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.879625 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-ql7kf" podStartSLOduration=78.879605282 podStartE2EDuration="1m18.879605282s" podCreationTimestamp="2025-11-24 16:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:53:39.864836658 +0000 UTC m=+101.111805316" watchObservedRunningTime="2025-11-24 16:53:39.879605282 +0000 UTC m=+101.126573940" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.891593 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=79.891569864 podStartE2EDuration="1m19.891569864s" podCreationTimestamp="2025-11-24 16:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:53:39.879399616 +0000 UTC m=+101.126368284" watchObservedRunningTime="2025-11-24 16:53:39.891569864 +0000 UTC m=+101.138538532" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.915016 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podStartSLOduration=78.914991184 podStartE2EDuration="1m18.914991184s" podCreationTimestamp="2025-11-24 16:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:53:39.914114478 +0000 UTC m=+101.161083206" watchObservedRunningTime="2025-11-24 16:53:39.914991184 +0000 UTC m=+101.161959842" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.920563 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.920610 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.920621 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.920639 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
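
The four "Observed pod startup duration" entries above are kubelet's startup-latency bookkeeping: podStartSLOduration is the startup SLI in seconds (broadly, excluding image-pull time), podStartE2EDuration runs from pod creation to the first observed running state, and the firstStartedPulling/lastFinishedPulling values of "0001-01-01 00:00:00 +0000 UTC" are Go's zero time.Time, meaning no image pull was recorded for these pods, which is why the SLO and E2E figures coincide here. A minimal sketch of pulling these fields apart, using one entry from the log above (the parsing is illustrative, not a kubelet API):

    import re
    from datetime import datetime

    # One "Observed pod startup duration" entry from the log above (abridged).
    entry = (
        'pod="openshift-dns/node-resolver-wlblb" podStartSLOduration=78.831651042 '
        'podStartE2EDuration="1m18.831651042s" '
        'podCreationTimestamp="2025-11-24 16:52:21 +0000 UTC" '
        'watchObservedRunningTime="2025-11-24 16:53:39.831651042 +0000 UTC m=+101.078619700"'
    )

    def parse_go_time(value: str) -> datetime:
        """Parse a Go-formatted timestamp the way kubelet logs it."""
        value = value.split(" m=")[0]                  # drop the monotonic-clock reading
        value = re.sub(r"(\.\d{6})\d+", r"\1", value)  # %f accepts at most 6 fractional digits
        fmt = "%Y-%m-%d %H:%M:%S.%f %z UTC" if "." in value else "%Y-%m-%d %H:%M:%S %z UTC"
        return datetime.strptime(value, fmt)

    fields = dict(re.findall(r'(\w+)="([^"]*)"', entry))
    created = parse_go_time(fields["podCreationTimestamp"])
    running = parse_go_time(fields["watchObservedRunningTime"])
    # Matches the logged podStartE2EDuration of "1m18.831651042s" to microsecond precision.
    print(f'{fields["pod"]}: {(running - created).total_seconds():.6f}s creation-to-running')
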
Nov 24 16:53:39 crc kubenswrapper[4768]: I1124 16:53:39.920652 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:39Z","lastTransitionTime":"2025-11-24T16:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.023657 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.023699 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.023712 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.023731 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.023741 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:40Z","lastTransitionTime":"2025-11-24T16:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.125905 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.125949 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.125962 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.125978 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.125990 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:40Z","lastTransitionTime":"2025-11-24T16:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.228726 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.229152 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.229170 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.229194 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.229213 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:40Z","lastTransitionTime":"2025-11-24T16:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.236436 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs\") pod \"network-metrics-daemon-275xl\" (UID: \"ff18637c-91e0-4ea4-9f9a-53c5b0277927\") " pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:40 crc kubenswrapper[4768]: E1124 16:53:40.236625 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 16:53:40 crc kubenswrapper[4768]: E1124 16:53:40.236737 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs podName:ff18637c-91e0-4ea4-9f9a-53c5b0277927 nodeName:}" failed. No retries permitted until 2025-11-24 16:54:44.236709099 +0000 UTC m=+165.483677787 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs") pod "network-metrics-daemon-275xl" (UID: "ff18637c-91e0-4ea4-9f9a-53c5b0277927") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.332801 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.332898 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.332922 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.332949 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
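
The MountVolume.SetUp failure above is rejected because the secret is "not registered" with kubelet's object manager (its watch has not delivered openshift-multus/metrics-daemon-secret), and instead of a tight retry loop the operation is parked: "No retries permitted until 2025-11-24 16:54:44 ... (durationBeforeRetry 1m4s)". A 64-second wait is consistent with a doubling backoff. A small sketch of that arithmetic, where the 500ms initial step and roughly-two-minute cap are assumptions modelled on kubelet's exponential-backoff helper; only the 1m4s figure is taken from this log:

    from datetime import timedelta

    # Doubling backoff, patterned after kubelet's exponential retry for failed
    # volume operations. INITIAL and CAP are assumed values; the 1m4s step is
    # the one actually logged above.
    INITIAL = timedelta(milliseconds=500)
    FACTOR = 2
    CAP = timedelta(minutes=2, seconds=2)

    def backoff_schedule(attempts: int):
        duration = INITIAL
        for _ in range(attempts):
            yield duration
            duration = min(duration * FACTOR, CAP)

    for attempt, wait in enumerate(backoff_schedule(10), start=1):
        print(f"attempt {attempt}: wait {wait.total_seconds():g}s")
    # attempt 8 waits 64s, matching "durationBeforeRetry 1m4s" above.
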
Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.332967 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:40Z","lastTransitionTime":"2025-11-24T16:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.436839 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.436912 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.436933 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.436962 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.436985 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:40Z","lastTransitionTime":"2025-11-24T16:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.539077 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.539161 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.539181 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.539214 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.539236 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:40Z","lastTransitionTime":"2025-11-24T16:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.580699 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.580749 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.580841 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:40 crc kubenswrapper[4768]: E1124 16:53:40.580896 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:40 crc kubenswrapper[4768]: E1124 16:53:40.581118 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:40 crc kubenswrapper[4768]: E1124 16:53:40.581211 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.641809 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.641891 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.641917 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.641951 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.641971 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:40Z","lastTransitionTime":"2025-11-24T16:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.744808 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.744896 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.744914 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.744943 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.744964 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:40Z","lastTransitionTime":"2025-11-24T16:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.848241 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.848307 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.848326 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.848380 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.848401 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:40Z","lastTransitionTime":"2025-11-24T16:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.951541 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.951618 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.951636 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.951665 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:40 crc kubenswrapper[4768]: I1124 16:53:40.951685 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:40Z","lastTransitionTime":"2025-11-24T16:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.054696 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.054743 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.054756 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.054775 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.054788 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:41Z","lastTransitionTime":"2025-11-24T16:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.157603 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.157661 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.157671 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.157688 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.157701 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:41Z","lastTransitionTime":"2025-11-24T16:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.260849 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.260891 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.260901 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.260919 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.260932 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:41Z","lastTransitionTime":"2025-11-24T16:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.363963 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.364008 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.364019 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.364039 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.364053 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:41Z","lastTransitionTime":"2025-11-24T16:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.467072 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.467126 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.467139 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.467169 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.467185 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:41Z","lastTransitionTime":"2025-11-24T16:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.569331 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.569400 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.569412 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.569432 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.569461 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:41Z","lastTransitionTime":"2025-11-24T16:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.579998 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:41 crc kubenswrapper[4768]: E1124 16:53:41.580101 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.673061 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.673130 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.673149 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.673177 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.673197 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:41Z","lastTransitionTime":"2025-11-24T16:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.775538 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.775617 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.775635 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.775663 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.775683 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:41Z","lastTransitionTime":"2025-11-24T16:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.879138 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.879201 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.879224 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.879254 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.879272 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:41Z","lastTransitionTime":"2025-11-24T16:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.982556 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.982618 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.982635 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.982664 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:41 crc kubenswrapper[4768]: I1124 16:53:41.982686 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:41Z","lastTransitionTime":"2025-11-24T16:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.085543 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.085612 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.085629 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.085659 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.085678 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:42Z","lastTransitionTime":"2025-11-24T16:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.188625 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.188731 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.188753 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.188828 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.188851 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:42Z","lastTransitionTime":"2025-11-24T16:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.291978 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.292453 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.292686 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.292915 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.293139 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:42Z","lastTransitionTime":"2025-11-24T16:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.396131 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.396215 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.396246 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.396280 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.396306 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:42Z","lastTransitionTime":"2025-11-24T16:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.499063 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.499143 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.499162 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.499193 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.499214 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:42Z","lastTransitionTime":"2025-11-24T16:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.580410 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.580445 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.580485 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:42 crc kubenswrapper[4768]: E1124 16:53:42.580626 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:42 crc kubenswrapper[4768]: E1124 16:53:42.580730 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:42 crc kubenswrapper[4768]: E1124 16:53:42.580893 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.602425 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.602490 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.602511 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.602540 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.602559 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:42Z","lastTransitionTime":"2025-11-24T16:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.705603 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.705652 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.705663 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.705681 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.705693 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:42Z","lastTransitionTime":"2025-11-24T16:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.808921 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.808999 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.809022 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.809053 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.809077 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:42Z","lastTransitionTime":"2025-11-24T16:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.911854 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.911909 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.911920 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.911937 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:42 crc kubenswrapper[4768]: I1124 16:53:42.911950 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:42Z","lastTransitionTime":"2025-11-24T16:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.014950 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.014994 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.015006 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.015024 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.015035 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:43Z","lastTransitionTime":"2025-11-24T16:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.118032 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.118073 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.118082 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.118096 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.118104 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:43Z","lastTransitionTime":"2025-11-24T16:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.220541 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.220648 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.220661 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.220698 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.220709 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:43Z","lastTransitionTime":"2025-11-24T16:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.323773 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.323845 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.323863 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.323890 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.323914 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:43Z","lastTransitionTime":"2025-11-24T16:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.426277 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.426337 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.426387 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.426413 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.426431 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:43Z","lastTransitionTime":"2025-11-24T16:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.529052 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.529090 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.529099 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.529114 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.529123 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:43Z","lastTransitionTime":"2025-11-24T16:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.580417 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:43 crc kubenswrapper[4768]: E1124 16:53:43.580649 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.631195 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.631287 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.631307 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.631331 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.631385 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:43Z","lastTransitionTime":"2025-11-24T16:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.733610 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.733682 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.733705 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.733737 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.733763 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:43Z","lastTransitionTime":"2025-11-24T16:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.836690 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.836734 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.836750 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.836769 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.836781 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:43Z","lastTransitionTime":"2025-11-24T16:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.940107 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.940189 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.940208 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.940233 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:43 crc kubenswrapper[4768]: I1124 16:53:43.940253 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:43Z","lastTransitionTime":"2025-11-24T16:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.042544 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.042616 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.042642 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.042673 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.042695 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:44Z","lastTransitionTime":"2025-11-24T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.145461 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.145532 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.145557 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.145585 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.145606 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:44Z","lastTransitionTime":"2025-11-24T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.248591 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.248650 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.248661 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.248678 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.248690 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:44Z","lastTransitionTime":"2025-11-24T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.351203 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.351271 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.351288 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.351317 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.351336 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:44Z","lastTransitionTime":"2025-11-24T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.454698 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.454788 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.454815 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.454846 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.454868 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:44Z","lastTransitionTime":"2025-11-24T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.557247 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.557331 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.557376 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.557399 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.557417 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:44Z","lastTransitionTime":"2025-11-24T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.580094 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.580143 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.580108 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:44 crc kubenswrapper[4768]: E1124 16:53:44.580266 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:44 crc kubenswrapper[4768]: E1124 16:53:44.580382 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:44 crc kubenswrapper[4768]: E1124 16:53:44.580499 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.660306 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.660385 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.660406 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.660431 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.660448 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:44Z","lastTransitionTime":"2025-11-24T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.762779 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.762835 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.762852 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.762877 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.762895 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:44Z","lastTransitionTime":"2025-11-24T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.866020 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.866085 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.866112 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.866143 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.866167 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:44Z","lastTransitionTime":"2025-11-24T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.969716 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.969899 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.969930 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.969961 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:44 crc kubenswrapper[4768]: I1124 16:53:44.969981 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:44Z","lastTransitionTime":"2025-11-24T16:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.073652 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.073720 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.073739 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.073769 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.073794 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:45Z","lastTransitionTime":"2025-11-24T16:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.177164 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.177296 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.177324 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.177388 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.177426 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:45Z","lastTransitionTime":"2025-11-24T16:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.280423 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.280487 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.280505 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.280529 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.280546 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:45Z","lastTransitionTime":"2025-11-24T16:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.383971 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.384036 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.384055 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.384084 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.384104 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:45Z","lastTransitionTime":"2025-11-24T16:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.487512 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.487565 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.487584 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.487610 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.487627 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:45Z","lastTransitionTime":"2025-11-24T16:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.579844 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:45 crc kubenswrapper[4768]: E1124 16:53:45.580059 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.590458 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.590535 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.590554 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.590588 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.590608 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:45Z","lastTransitionTime":"2025-11-24T16:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.693345 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.693434 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.693457 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.693484 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.693504 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:45Z","lastTransitionTime":"2025-11-24T16:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.796426 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.796478 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.796501 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.796525 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.796546 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:45Z","lastTransitionTime":"2025-11-24T16:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.899079 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.899132 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.899147 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.899166 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:45 crc kubenswrapper[4768]: I1124 16:53:45.899179 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:45Z","lastTransitionTime":"2025-11-24T16:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.002230 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.002305 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.002318 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.002338 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.002370 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:46Z","lastTransitionTime":"2025-11-24T16:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.105200 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.105284 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.105308 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.105391 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.105419 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:46Z","lastTransitionTime":"2025-11-24T16:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.208065 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.208117 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.208135 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.208162 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.208181 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:46Z","lastTransitionTime":"2025-11-24T16:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.311017 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.311081 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.311100 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.311124 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.311143 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:46Z","lastTransitionTime":"2025-11-24T16:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.413939 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.413996 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.414013 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.414038 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.414057 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:46Z","lastTransitionTime":"2025-11-24T16:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.517586 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.517650 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.517672 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.517706 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.517728 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:46Z","lastTransitionTime":"2025-11-24T16:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.580651 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.580743 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.580770 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:46 crc kubenswrapper[4768]: E1124 16:53:46.580834 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:46 crc kubenswrapper[4768]: E1124 16:53:46.580916 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:46 crc kubenswrapper[4768]: E1124 16:53:46.581133 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.620860 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.620916 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.620934 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.620957 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.620980 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:46Z","lastTransitionTime":"2025-11-24T16:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.723419 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.723485 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.723503 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.723532 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.723552 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:46Z","lastTransitionTime":"2025-11-24T16:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.825879 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.825930 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.825946 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.825967 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.825983 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:46Z","lastTransitionTime":"2025-11-24T16:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.929111 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.929174 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.929186 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.929207 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:46 crc kubenswrapper[4768]: I1124 16:53:46.929219 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:46Z","lastTransitionTime":"2025-11-24T16:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.031596 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.031637 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.031647 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.031662 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.031671 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:47Z","lastTransitionTime":"2025-11-24T16:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.135056 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.135097 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.135105 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.135121 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.135130 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:47Z","lastTransitionTime":"2025-11-24T16:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.237316 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.237395 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.237408 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.237425 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.237437 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:47Z","lastTransitionTime":"2025-11-24T16:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.339689 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.339731 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.339741 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.339756 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.339765 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:47Z","lastTransitionTime":"2025-11-24T16:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.441882 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.441953 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.441972 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.441999 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.442017 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:47Z","lastTransitionTime":"2025-11-24T16:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.545051 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.545134 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.545156 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.545182 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.545201 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:47Z","lastTransitionTime":"2025-11-24T16:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.581015 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:47 crc kubenswrapper[4768]: E1124 16:53:47.581251 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.648067 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.648150 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.648175 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.648202 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.648249 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:47Z","lastTransitionTime":"2025-11-24T16:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.753219 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.753294 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.753313 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.753339 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.753386 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:47Z","lastTransitionTime":"2025-11-24T16:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.856149 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.856203 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.856220 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.856244 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.856260 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:47Z","lastTransitionTime":"2025-11-24T16:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.960061 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.960113 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.960124 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.960144 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:47 crc kubenswrapper[4768]: I1124 16:53:47.960156 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:47Z","lastTransitionTime":"2025-11-24T16:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.062095 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.062131 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.062141 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.062158 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.062172 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:48Z","lastTransitionTime":"2025-11-24T16:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.165333 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.165423 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.165448 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.165478 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.165500 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:48Z","lastTransitionTime":"2025-11-24T16:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.268142 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.268232 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.268255 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.268286 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.268309 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:48Z","lastTransitionTime":"2025-11-24T16:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.371031 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.371101 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.371112 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.371129 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.371142 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:48Z","lastTransitionTime":"2025-11-24T16:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.474082 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.474149 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.474171 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.474203 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.474226 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:48Z","lastTransitionTime":"2025-11-24T16:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.577098 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.577163 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.577181 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.577206 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.577223 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:48Z","lastTransitionTime":"2025-11-24T16:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.580509 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.580594 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.580625 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:48 crc kubenswrapper[4768]: E1124 16:53:48.580790 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:48 crc kubenswrapper[4768]: E1124 16:53:48.580941 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:48 crc kubenswrapper[4768]: E1124 16:53:48.581033 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.679886 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.680017 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.680097 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.680132 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.680169 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:48Z","lastTransitionTime":"2025-11-24T16:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.784019 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.784094 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.784115 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.784146 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.784163 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:48Z","lastTransitionTime":"2025-11-24T16:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.886828 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.886876 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.886887 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.886903 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.886914 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:48Z","lastTransitionTime":"2025-11-24T16:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.924459 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.924520 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.924539 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.924569 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.924588 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T16:53:48Z","lastTransitionTime":"2025-11-24T16:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.988911 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-62ppx"] Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.989702 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62ppx" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.992641 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.992642 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.993402 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 24 16:53:48 crc kubenswrapper[4768]: I1124 16:53:48.994800 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 24 16:53:49 crc kubenswrapper[4768]: I1124 16:53:49.038427 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-62ppx\" (UID: \"4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62ppx" Nov 24 16:53:49 crc kubenswrapper[4768]: I1124 16:53:49.038809 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee-service-ca\") pod \"cluster-version-operator-5c965bbfc6-62ppx\" (UID: \"4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62ppx" Nov 24 16:53:49 crc kubenswrapper[4768]: I1124 16:53:49.039047 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-62ppx\" (UID: \"4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62ppx" Nov 24 16:53:49 crc kubenswrapper[4768]: I1124 16:53:49.039275 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-62ppx\" (UID: \"4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62ppx" Nov 24 16:53:49 crc kubenswrapper[4768]: I1124 16:53:49.039556 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-62ppx\" (UID: \"4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62ppx" Nov 24 16:53:49 crc kubenswrapper[4768]: I1124 16:53:49.141115 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-62ppx\" (UID: \"4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62ppx" Nov 24 16:53:49 crc 
kubenswrapper[4768]: I1124 16:53:49.141168 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-62ppx\" (UID: \"4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62ppx" Nov 24 16:53:49 crc kubenswrapper[4768]: I1124 16:53:49.141200 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-62ppx\" (UID: \"4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62ppx" Nov 24 16:53:49 crc kubenswrapper[4768]: I1124 16:53:49.141225 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee-service-ca\") pod \"cluster-version-operator-5c965bbfc6-62ppx\" (UID: \"4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62ppx" Nov 24 16:53:49 crc kubenswrapper[4768]: I1124 16:53:49.141257 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-62ppx\" (UID: \"4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62ppx" Nov 24 16:53:49 crc kubenswrapper[4768]: I1124 16:53:49.141330 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-62ppx\" (UID: \"4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62ppx" Nov 24 16:53:49 crc kubenswrapper[4768]: I1124 16:53:49.141420 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-62ppx\" (UID: \"4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62ppx" Nov 24 16:53:49 crc kubenswrapper[4768]: I1124 16:53:49.142767 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee-service-ca\") pod \"cluster-version-operator-5c965bbfc6-62ppx\" (UID: \"4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62ppx" Nov 24 16:53:49 crc kubenswrapper[4768]: I1124 16:53:49.149559 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-62ppx\" (UID: \"4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62ppx" Nov 24 16:53:49 crc kubenswrapper[4768]: I1124 16:53:49.166389 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-62ppx\" (UID: \"4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62ppx" Nov 24 16:53:49 crc kubenswrapper[4768]: I1124 16:53:49.311278 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62ppx" Nov 24 16:53:49 crc kubenswrapper[4768]: W1124 16:53:49.328871 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cfa0a2f_8c0b_4fc4_9212_48bf3874c7ee.slice/crio-a8f4cb01d487df98acac5af17ec94a93cd929819b53695a1d8c716a637bcacfa WatchSource:0}: Error finding container a8f4cb01d487df98acac5af17ec94a93cd929819b53695a1d8c716a637bcacfa: Status 404 returned error can't find the container with id a8f4cb01d487df98acac5af17ec94a93cd929819b53695a1d8c716a637bcacfa Nov 24 16:53:49 crc kubenswrapper[4768]: I1124 16:53:49.580786 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:49 crc kubenswrapper[4768]: E1124 16:53:49.581892 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:50 crc kubenswrapper[4768]: I1124 16:53:50.222656 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62ppx" event={"ID":"4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee","Type":"ContainerStarted","Data":"e7f35e9d5d691a1fd60425678512d4b5db166586bc7da102a90a5227e7da1943"} Nov 24 16:53:50 crc kubenswrapper[4768]: I1124 16:53:50.222733 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62ppx" event={"ID":"4cfa0a2f-8c0b-4fc4-9212-48bf3874c7ee","Type":"ContainerStarted","Data":"a8f4cb01d487df98acac5af17ec94a93cd929819b53695a1d8c716a637bcacfa"} Nov 24 16:53:50 crc kubenswrapper[4768]: I1124 16:53:50.579925 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:50 crc kubenswrapper[4768]: I1124 16:53:50.579941 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:50 crc kubenswrapper[4768]: E1124 16:53:50.580276 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:50 crc kubenswrapper[4768]: I1124 16:53:50.580473 4768 scope.go:117] "RemoveContainer" containerID="10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07" Nov 24 16:53:50 crc kubenswrapper[4768]: E1124 16:53:50.580489 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:50 crc kubenswrapper[4768]: I1124 16:53:50.580528 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:50 crc kubenswrapper[4768]: E1124 16:53:50.580675 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-98lk9_openshift-ovn-kubernetes(17a83d5e-e5e7-422d-ab0e-647ca2eefb37)\"" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" Nov 24 16:53:50 crc kubenswrapper[4768]: E1124 16:53:50.580759 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:51 crc kubenswrapper[4768]: I1124 16:53:51.580190 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:51 crc kubenswrapper[4768]: E1124 16:53:51.580468 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:52 crc kubenswrapper[4768]: I1124 16:53:52.579789 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:52 crc kubenswrapper[4768]: I1124 16:53:52.579789 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:52 crc kubenswrapper[4768]: I1124 16:53:52.579818 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:52 crc kubenswrapper[4768]: E1124 16:53:52.580138 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:52 crc kubenswrapper[4768]: E1124 16:53:52.580232 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:52 crc kubenswrapper[4768]: E1124 16:53:52.579960 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:53 crc kubenswrapper[4768]: I1124 16:53:53.580821 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:53 crc kubenswrapper[4768]: E1124 16:53:53.581044 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:54 crc kubenswrapper[4768]: I1124 16:53:54.580572 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:54 crc kubenswrapper[4768]: E1124 16:53:54.581366 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:54 crc kubenswrapper[4768]: I1124 16:53:54.581564 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:54 crc kubenswrapper[4768]: I1124 16:53:54.582278 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:54 crc kubenswrapper[4768]: E1124 16:53:54.582412 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:54 crc kubenswrapper[4768]: E1124 16:53:54.582535 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:55 crc kubenswrapper[4768]: I1124 16:53:55.580538 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:55 crc kubenswrapper[4768]: E1124 16:53:55.581057 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:56 crc kubenswrapper[4768]: I1124 16:53:56.245396 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k8vfj_b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a/kube-multus/1.log" Nov 24 16:53:56 crc kubenswrapper[4768]: I1124 16:53:56.245998 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k8vfj_b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a/kube-multus/0.log" Nov 24 16:53:56 crc kubenswrapper[4768]: I1124 16:53:56.246043 4768 generic.go:334] "Generic (PLEG): container finished" podID="b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a" containerID="e9a43420d4b39e1291af651377602da94003efadb5a395178d644b9333412e35" exitCode=1 Nov 24 16:53:56 crc kubenswrapper[4768]: I1124 16:53:56.246083 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k8vfj" event={"ID":"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a","Type":"ContainerDied","Data":"e9a43420d4b39e1291af651377602da94003efadb5a395178d644b9333412e35"} Nov 24 16:53:56 crc kubenswrapper[4768]: I1124 16:53:56.246125 4768 scope.go:117] "RemoveContainer" containerID="4dd0a64e9d8be089e7be694a81c70af272299a51838098aacc6dae779a4d8db5" Nov 24 16:53:56 crc kubenswrapper[4768]: I1124 16:53:56.246563 4768 scope.go:117] "RemoveContainer" containerID="e9a43420d4b39e1291af651377602da94003efadb5a395178d644b9333412e35" Nov 24 16:53:56 crc kubenswrapper[4768]: E1124 16:53:56.246768 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-k8vfj_openshift-multus(b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a)\"" pod="openshift-multus/multus-k8vfj" podUID="b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a" Nov 24 16:53:56 crc kubenswrapper[4768]: I1124 16:53:56.271710 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-62ppx" podStartSLOduration=95.271689902 podStartE2EDuration="1m35.271689902s" podCreationTimestamp="2025-11-24 16:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:53:50.245785747 +0000 UTC m=+111.492754415" 
watchObservedRunningTime="2025-11-24 16:53:56.271689902 +0000 UTC m=+117.518658580" Nov 24 16:53:56 crc kubenswrapper[4768]: I1124 16:53:56.580091 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:56 crc kubenswrapper[4768]: I1124 16:53:56.580181 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:56 crc kubenswrapper[4768]: I1124 16:53:56.580120 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:56 crc kubenswrapper[4768]: E1124 16:53:56.580288 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:56 crc kubenswrapper[4768]: E1124 16:53:56.580412 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:56 crc kubenswrapper[4768]: E1124 16:53:56.580535 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:57 crc kubenswrapper[4768]: I1124 16:53:57.251875 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k8vfj_b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a/kube-multus/1.log" Nov 24 16:53:57 crc kubenswrapper[4768]: I1124 16:53:57.580099 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:57 crc kubenswrapper[4768]: E1124 16:53:57.580230 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:58 crc kubenswrapper[4768]: I1124 16:53:58.579762 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:53:58 crc kubenswrapper[4768]: I1124 16:53:58.579860 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:53:58 crc kubenswrapper[4768]: E1124 16:53:58.579903 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:53:58 crc kubenswrapper[4768]: I1124 16:53:58.579933 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:53:58 crc kubenswrapper[4768]: E1124 16:53:58.580056 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:53:58 crc kubenswrapper[4768]: E1124 16:53:58.580493 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:53:59 crc kubenswrapper[4768]: I1124 16:53:59.580592 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:53:59 crc kubenswrapper[4768]: E1124 16:53:59.582432 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:53:59 crc kubenswrapper[4768]: E1124 16:53:59.596990 4768 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 24 16:53:59 crc kubenswrapper[4768]: E1124 16:53:59.682500 4768 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 16:54:00 crc kubenswrapper[4768]: I1124 16:54:00.579949 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:54:00 crc kubenswrapper[4768]: I1124 16:54:00.580037 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:54:00 crc kubenswrapper[4768]: E1124 16:54:00.580085 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:54:00 crc kubenswrapper[4768]: E1124 16:54:00.580171 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:54:00 crc kubenswrapper[4768]: I1124 16:54:00.579949 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:54:00 crc kubenswrapper[4768]: E1124 16:54:00.580891 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:54:01 crc kubenswrapper[4768]: I1124 16:54:01.580378 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:54:01 crc kubenswrapper[4768]: E1124 16:54:01.580615 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:54:02 crc kubenswrapper[4768]: I1124 16:54:02.579758 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:54:02 crc kubenswrapper[4768]: I1124 16:54:02.579758 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:54:02 crc kubenswrapper[4768]: E1124 16:54:02.579908 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:54:02 crc kubenswrapper[4768]: I1124 16:54:02.579783 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:54:02 crc kubenswrapper[4768]: E1124 16:54:02.580108 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:54:02 crc kubenswrapper[4768]: E1124 16:54:02.580164 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:54:03 crc kubenswrapper[4768]: I1124 16:54:03.580478 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:54:03 crc kubenswrapper[4768]: E1124 16:54:03.580627 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:54:04 crc kubenswrapper[4768]: I1124 16:54:04.580903 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:54:04 crc kubenswrapper[4768]: I1124 16:54:04.580969 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:54:04 crc kubenswrapper[4768]: E1124 16:54:04.581124 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:54:04 crc kubenswrapper[4768]: I1124 16:54:04.581673 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:54:04 crc kubenswrapper[4768]: E1124 16:54:04.581828 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:54:04 crc kubenswrapper[4768]: E1124 16:54:04.582009 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:54:04 crc kubenswrapper[4768]: I1124 16:54:04.582271 4768 scope.go:117] "RemoveContainer" containerID="10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07" Nov 24 16:54:04 crc kubenswrapper[4768]: E1124 16:54:04.684003 4768 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 16:54:05 crc kubenswrapper[4768]: I1124 16:54:05.282220 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-98lk9_17a83d5e-e5e7-422d-ab0e-647ca2eefb37/ovnkube-controller/3.log" Nov 24 16:54:05 crc kubenswrapper[4768]: I1124 16:54:05.286728 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerStarted","Data":"a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4"} Nov 24 16:54:05 crc kubenswrapper[4768]: I1124 16:54:05.287701 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:54:05 crc kubenswrapper[4768]: I1124 16:54:05.323707 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" podStartSLOduration=104.323681539 podStartE2EDuration="1m44.323681539s" podCreationTimestamp="2025-11-24 16:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:05.32236891 +0000 UTC m=+126.569337588" watchObservedRunningTime="2025-11-24 16:54:05.323681539 +0000 UTC m=+126.570650227" Nov 24 16:54:05 crc kubenswrapper[4768]: I1124 16:54:05.516060 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-275xl"] Nov 24 16:54:05 crc kubenswrapper[4768]: I1124 16:54:05.516191 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:54:05 crc kubenswrapper[4768]: E1124 16:54:05.516284 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:54:06 crc kubenswrapper[4768]: I1124 16:54:06.580645 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:54:06 crc kubenswrapper[4768]: I1124 16:54:06.580672 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:54:06 crc kubenswrapper[4768]: I1124 16:54:06.580794 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:54:06 crc kubenswrapper[4768]: E1124 16:54:06.580935 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:54:06 crc kubenswrapper[4768]: E1124 16:54:06.581255 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:54:06 crc kubenswrapper[4768]: E1124 16:54:06.581548 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:54:07 crc kubenswrapper[4768]: I1124 16:54:07.580516 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:54:07 crc kubenswrapper[4768]: E1124 16:54:07.580789 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:54:08 crc kubenswrapper[4768]: I1124 16:54:08.580558 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:54:08 crc kubenswrapper[4768]: I1124 16:54:08.580643 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:54:08 crc kubenswrapper[4768]: E1124 16:54:08.580753 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:54:08 crc kubenswrapper[4768]: I1124 16:54:08.580769 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:54:08 crc kubenswrapper[4768]: E1124 16:54:08.580924 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:54:08 crc kubenswrapper[4768]: E1124 16:54:08.581239 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:54:08 crc kubenswrapper[4768]: I1124 16:54:08.581381 4768 scope.go:117] "RemoveContainer" containerID="e9a43420d4b39e1291af651377602da94003efadb5a395178d644b9333412e35" Nov 24 16:54:09 crc kubenswrapper[4768]: I1124 16:54:09.306774 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k8vfj_b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a/kube-multus/1.log" Nov 24 16:54:09 crc kubenswrapper[4768]: I1124 16:54:09.307231 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k8vfj" event={"ID":"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a","Type":"ContainerStarted","Data":"2fbf7caa990d15db46c9ad04c45497db183c9d27d796bac50c5946e2dbdeb941"} Nov 24 16:54:09 crc kubenswrapper[4768]: I1124 16:54:09.580676 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:54:09 crc kubenswrapper[4768]: E1124 16:54:09.583326 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:54:09 crc kubenswrapper[4768]: E1124 16:54:09.685039 4768 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 16:54:10 crc kubenswrapper[4768]: I1124 16:54:10.579986 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:54:10 crc kubenswrapper[4768]: I1124 16:54:10.580083 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:54:10 crc kubenswrapper[4768]: E1124 16:54:10.580197 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:54:10 crc kubenswrapper[4768]: E1124 16:54:10.580395 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:54:10 crc kubenswrapper[4768]: I1124 16:54:10.580732 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:54:10 crc kubenswrapper[4768]: E1124 16:54:10.580975 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:54:11 crc kubenswrapper[4768]: I1124 16:54:11.580012 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:54:11 crc kubenswrapper[4768]: E1124 16:54:11.580144 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:54:12 crc kubenswrapper[4768]: I1124 16:54:12.580070 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:54:12 crc kubenswrapper[4768]: I1124 16:54:12.580083 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:54:12 crc kubenswrapper[4768]: E1124 16:54:12.580255 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:54:12 crc kubenswrapper[4768]: I1124 16:54:12.580105 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:54:12 crc kubenswrapper[4768]: E1124 16:54:12.580457 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:54:12 crc kubenswrapper[4768]: E1124 16:54:12.580591 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:54:13 crc kubenswrapper[4768]: I1124 16:54:13.580481 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:54:13 crc kubenswrapper[4768]: E1124 16:54:13.580695 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-275xl" podUID="ff18637c-91e0-4ea4-9f9a-53c5b0277927" Nov 24 16:54:13 crc kubenswrapper[4768]: I1124 16:54:13.629473 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 16:54:14 crc kubenswrapper[4768]: I1124 16:54:14.580326 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:54:14 crc kubenswrapper[4768]: I1124 16:54:14.580433 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:54:14 crc kubenswrapper[4768]: I1124 16:54:14.580491 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:54:14 crc kubenswrapper[4768]: E1124 16:54:14.580547 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 16:54:14 crc kubenswrapper[4768]: E1124 16:54:14.580694 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 16:54:14 crc kubenswrapper[4768]: E1124 16:54:14.580810 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 16:54:15 crc kubenswrapper[4768]: I1124 16:54:15.580494 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:54:15 crc kubenswrapper[4768]: I1124 16:54:15.583500 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 24 16:54:15 crc kubenswrapper[4768]: I1124 16:54:15.583956 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 24 16:54:16 crc kubenswrapper[4768]: I1124 16:54:16.580154 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:54:16 crc kubenswrapper[4768]: I1124 16:54:16.580241 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:54:16 crc kubenswrapper[4768]: I1124 16:54:16.580383 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:54:16 crc kubenswrapper[4768]: I1124 16:54:16.583483 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 24 16:54:16 crc kubenswrapper[4768]: I1124 16:54:16.583622 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 24 16:54:16 crc kubenswrapper[4768]: I1124 16:54:16.583650 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 24 16:54:16 crc kubenswrapper[4768]: I1124 16:54:16.585578 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.399659 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.446121 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npfbr"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.447010 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npfbr" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.449404 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.450205 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.452292 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kbq4r"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.453100 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.460194 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.460557 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.460692 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.461159 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.461518 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.461733 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.462091 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.462150 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.465760 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.466079 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.466209 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.466518 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.466675 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.466754 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.466873 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.467252 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.467987 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.469012 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv"] Nov 24 16:54:19 crc kubenswrapper[4768]: 
I1124 16:54:19.469781 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.475075 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.475577 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.475932 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-57xr4"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.486316 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gq6hn"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.494630 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.494656 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.494820 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-57xr4" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.495292 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.500268 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.502076 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.502625 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.503292 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vh9gq"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.503491 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.503620 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-wl6bz"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.504056 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vh9gq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.504081 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.504335 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wl6bz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.504449 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.511760 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.512048 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.512342 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.512605 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.512743 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.512959 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.513144 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.514420 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-55hvq"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.515345 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-55hvq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.515608 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4dgcz"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.516210 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.516808 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.516892 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.517063 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.517220 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.517380 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.517465 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.517552 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.517620 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.517654 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.517724 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.517532 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.517925 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.518338 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mbvp9"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.518801 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.519320 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-mbvp9" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.522030 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.522255 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.523098 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.525166 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.525398 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.525410 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.525473 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.525553 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.525566 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.525689 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.525873 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.525918 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.526509 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.526820 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qp5wr"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.527328 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qp5wr" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.528053 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.528330 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.528599 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.528790 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.529072 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.529375 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.529628 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.539146 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.539724 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fvztb"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.540700 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fvztb" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.541015 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wkrfg"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.541733 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.541862 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.542306 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.542711 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xxdrq"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.543494 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xxdrq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.556371 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mgmbb"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.557827 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.561047 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.561236 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.561404 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.562107 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-bkp5p"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.562506 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.564427 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.564707 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.564839 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.565919 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.568864 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.570309 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vm425"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.577193 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.578467 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.578908 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.579043 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.579227 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.582183 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rnhf2"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.582655 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-88z72"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.583025 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-88z72" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.583061 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rnhf2" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.583021 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vm425" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.586980 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.588798 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kzr98"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.589262 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tsz9j"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.589725 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsz9j" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.589797 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.590052 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kzr98" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.590444 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.591455 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-serving-cert\") pod \"route-controller-manager-6576b87f9c-pzf64\" (UID: \"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.591585 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76f7811c-28c6-4764-b44a-07cbfdb400c4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-kbq4r\" (UID: \"76f7811c-28c6-4764-b44a-07cbfdb400c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.591688 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cp8w\" (UniqueName: \"kubernetes.io/projected/76f7811c-28c6-4764-b44a-07cbfdb400c4-kube-api-access-6cp8w\") pod \"controller-manager-879f6c89f-kbq4r\" (UID: \"76f7811c-28c6-4764-b44a-07cbfdb400c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.591786 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0418ca12-7159-4da5-8b9c-3a408822a00e-serving-cert\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.591907 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d2k4\" (UniqueName: \"kubernetes.io/projected/b19a457f-0893-42b7-b7ac-f3b1446fbeac-kube-api-access-2d2k4\") pod \"openshift-apiserver-operator-796bbdcf4f-npfbr\" (UID: \"b19a457f-0893-42b7-b7ac-f3b1446fbeac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npfbr" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.592009 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76f7811c-28c6-4764-b44a-07cbfdb400c4-serving-cert\") pod \"controller-manager-879f6c89f-kbq4r\" (UID: \"76f7811c-28c6-4764-b44a-07cbfdb400c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.592112 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b19a457f-0893-42b7-b7ac-f3b1446fbeac-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-npfbr\" (UID: \"b19a457f-0893-42b7-b7ac-f3b1446fbeac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npfbr" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.592203 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29755561-9db9-416d-b847-182fdb322ca5-trusted-ca\") pod \"console-operator-58897d9998-57xr4\" (UID: \"29755561-9db9-416d-b847-182fdb322ca5\") " pod="openshift-console-operator/console-operator-58897d9998-57xr4" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.592292 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhh8p\" (UniqueName: \"kubernetes.io/projected/29755561-9db9-416d-b847-182fdb322ca5-kube-api-access-xhh8p\") pod \"console-operator-58897d9998-57xr4\" (UID: \"29755561-9db9-416d-b847-182fdb322ca5\") " pod="openshift-console-operator/console-operator-58897d9998-57xr4" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.592413 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29755561-9db9-416d-b847-182fdb322ca5-config\") pod \"console-operator-58897d9998-57xr4\" (UID: \"29755561-9db9-416d-b847-182fdb322ca5\") " pod="openshift-console-operator/console-operator-58897d9998-57xr4" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.592517 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76f7811c-28c6-4764-b44a-07cbfdb400c4-client-ca\") pod \"controller-manager-879f6c89f-kbq4r\" (UID: \"76f7811c-28c6-4764-b44a-07cbfdb400c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.592595 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0418ca12-7159-4da5-8b9c-3a408822a00e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.592709 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-client-ca\") pod \"route-controller-manager-6576b87f9c-pzf64\" (UID: \"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.592799 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0418ca12-7159-4da5-8b9c-3a408822a00e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.592898 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b19a457f-0893-42b7-b7ac-f3b1446fbeac-config\") pod \"openshift-apiserver-operator-796bbdcf4f-npfbr\" (UID: \"b19a457f-0893-42b7-b7ac-f3b1446fbeac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npfbr" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.592995 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0418ca12-7159-4da5-8b9c-3a408822a00e-encryption-config\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.593113 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0418ca12-7159-4da5-8b9c-3a408822a00e-audit-policies\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.593212 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c79p\" (UniqueName: \"kubernetes.io/projected/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-kube-api-access-2c79p\") pod \"route-controller-manager-6576b87f9c-pzf64\" (UID: \"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.593311 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29755561-9db9-416d-b847-182fdb322ca5-serving-cert\") pod \"console-operator-58897d9998-57xr4\" (UID: \"29755561-9db9-416d-b847-182fdb322ca5\") " pod="openshift-console-operator/console-operator-58897d9998-57xr4" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.593409 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0418ca12-7159-4da5-8b9c-3a408822a00e-audit-dir\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc 
kubenswrapper[4768]: I1124 16:54:19.593482 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-config\") pod \"route-controller-manager-6576b87f9c-pzf64\" (UID: \"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.593558 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0418ca12-7159-4da5-8b9c-3a408822a00e-etcd-client\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.593631 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d9r5\" (UniqueName: \"kubernetes.io/projected/0418ca12-7159-4da5-8b9c-3a408822a00e-kube-api-access-9d9r5\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.593705 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76f7811c-28c6-4764-b44a-07cbfdb400c4-config\") pod \"controller-manager-879f6c89f-kbq4r\" (UID: \"76f7811c-28c6-4764-b44a-07cbfdb400c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.594674 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.597591 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.600326 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dn5t9"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.600947 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dtzpg"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.601386 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dtzpg" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.601666 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dn5t9" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.602796 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.602965 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.603734 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tr27b"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.604203 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tr27b" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.614272 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npfbr"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.615459 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ljqpv"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.616096 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ljqpv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.616426 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdb6h"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.617944 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vdhkx"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.618371 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9pdk7"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.618535 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdb6h" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.618622 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-vdhkx" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.618789 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9pdk7" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.619074 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.619124 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.619488 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.619632 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.619725 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.619777 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.619885 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.622418 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-lcnvd"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.623145 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-grlvv"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.623201 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-lcnvd" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.624949 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-grlvv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.626470 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.626966 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.627124 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.627422 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sznn8"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.640312 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2db9l"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.641478 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2db9l" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.641801 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sznn8" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.644569 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.644795 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.645001 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.645245 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.645395 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rhk4d"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.645982 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.646585 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.646782 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.655738 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-55hvq"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.661009 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kbq4r"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.661094 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.667502 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.667680 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gq6hn"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.667713 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-57xr4"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.675102 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-26hqh"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.676477 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.676599 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-26hqh" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.681800 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.682712 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.683010 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.686551 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-s4wnf"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.690080 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.700546 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/868777b7-0ff7-4705-af3c-c453bb1418a3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xxdrq\" (UID: \"868777b7-0ff7-4705-af3c-c453bb1418a3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xxdrq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.700586 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/afbb3133-a1d9-48c9-a496-83babf4eb0c6-console-oauth-config\") pod \"console-f9d7485db-bkp5p\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.700610 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e21a9e0e-6de3-467a-b719-761919fd008c-machine-approver-tls\") pod \"machine-approver-56656f9798-wl6bz\" (UID: \"e21a9e0e-6de3-467a-b719-761919fd008c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wl6bz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.700631 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.700667 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0418ca12-7159-4da5-8b9c-3a408822a00e-audit-dir\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.700691 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-config\") pod \"apiserver-76f77b778f-gq6hn\" (UID: 
\"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.700716 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-config\") pod \"route-controller-manager-6576b87f9c-pzf64\" (UID: \"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.700735 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b92e5626-f326-4da0-a2de-a10abaf78719-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vh9gq\" (UID: \"b92e5626-f326-4da0-a2de-a10abaf78719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh9gq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.700756 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e21a9e0e-6de3-467a-b719-761919fd008c-config\") pod \"machine-approver-56656f9798-wl6bz\" (UID: \"e21a9e0e-6de3-467a-b719-761919fd008c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wl6bz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.700779 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22ca0047-9042-4627-a34d-1fab214b831a-trusted-ca\") pod \"ingress-operator-5b745b69d9-tsz9j\" (UID: \"22ca0047-9042-4627-a34d-1fab214b831a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsz9j" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.700820 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/063d4b06-d385-4749-8394-14041350b8e9-images\") pod \"machine-api-operator-5694c8668f-mbvp9\" (UID: \"063d4b06-d385-4749-8394-14041350b8e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mbvp9" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.700843 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f88px\" (UniqueName: \"kubernetes.io/projected/b92e5626-f326-4da0-a2de-a10abaf78719-kube-api-access-f88px\") pod \"authentication-operator-69f744f599-vh9gq\" (UID: \"b92e5626-f326-4da0-a2de-a10abaf78719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh9gq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.700863 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d77fa56-dcd9-464c-ae68-3f61838fd961-metrics-tls\") pod \"dns-operator-744455d44c-rnhf2\" (UID: \"7d77fa56-dcd9-464c-ae68-3f61838fd961\") " pod="openshift-dns-operator/dns-operator-744455d44c-rnhf2" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.700900 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0418ca12-7159-4da5-8b9c-3a408822a00e-etcd-client\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc 
kubenswrapper[4768]: I1124 16:54:19.700933 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d9r5\" (UniqueName: \"kubernetes.io/projected/0418ca12-7159-4da5-8b9c-3a408822a00e-kube-api-access-9d9r5\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.700956 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/063d4b06-d385-4749-8394-14041350b8e9-config\") pod \"machine-api-operator-5694c8668f-mbvp9\" (UID: \"063d4b06-d385-4749-8394-14041350b8e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mbvp9" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.700978 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csbc2\" (UniqueName: \"kubernetes.io/projected/e21a9e0e-6de3-467a-b719-761919fd008c-kube-api-access-csbc2\") pod \"machine-approver-56656f9798-wl6bz\" (UID: \"e21a9e0e-6de3-467a-b719-761919fd008c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wl6bz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701019 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701044 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76f7811c-28c6-4764-b44a-07cbfdb400c4-config\") pod \"controller-manager-879f6c89f-kbq4r\" (UID: \"76f7811c-28c6-4764-b44a-07cbfdb400c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701082 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701106 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-serving-cert\") pod \"route-controller-manager-6576b87f9c-pzf64\" (UID: \"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701127 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-node-pullsecrets\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701157 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/47d55901-e472-477e-9a26-fea65fce74a5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fvztb\" (UID: \"47d55901-e472-477e-9a26-fea65fce74a5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fvztb" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701179 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nq7t\" (UniqueName: \"kubernetes.io/projected/0178dda3-3c96-409e-8dee-789ecec9a47f-kube-api-access-6nq7t\") pod \"downloads-7954f5f757-88z72\" (UID: \"0178dda3-3c96-409e-8dee-789ecec9a47f\") " pod="openshift-console/downloads-7954f5f757-88z72" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701208 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cp8w\" (UniqueName: \"kubernetes.io/projected/76f7811c-28c6-4764-b44a-07cbfdb400c4-kube-api-access-6cp8w\") pod \"controller-manager-879f6c89f-kbq4r\" (UID: \"76f7811c-28c6-4764-b44a-07cbfdb400c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701233 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0418ca12-7159-4da5-8b9c-3a408822a00e-serving-cert\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701257 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701280 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76f7811c-28c6-4764-b44a-07cbfdb400c4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-kbq4r\" (UID: \"76f7811c-28c6-4764-b44a-07cbfdb400c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701311 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d2k4\" (UniqueName: \"kubernetes.io/projected/b19a457f-0893-42b7-b7ac-f3b1446fbeac-kube-api-access-2d2k4\") pod \"openshift-apiserver-operator-796bbdcf4f-npfbr\" (UID: \"b19a457f-0893-42b7-b7ac-f3b1446fbeac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npfbr" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701334 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-image-import-ca\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701375 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-encryption-config\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701405 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0953d228-0d4e-4cb5-a8d7-2a3c1709c312-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-kzr98\" (UID: \"0953d228-0d4e-4cb5-a8d7-2a3c1709c312\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kzr98" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701429 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b92e5626-f326-4da0-a2de-a10abaf78719-service-ca-bundle\") pod \"authentication-operator-69f744f599-vh9gq\" (UID: \"b92e5626-f326-4da0-a2de-a10abaf78719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh9gq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701450 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-audit-policies\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701472 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701504 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701525 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22ca0047-9042-4627-a34d-1fab214b831a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tsz9j\" (UID: \"22ca0047-9042-4627-a34d-1fab214b831a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsz9j" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701545 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-audit-dir\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701568 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xkt4q\" (UniqueName: \"kubernetes.io/projected/eba3e0f2-2704-43dd-b433-3a26b5200e77-kube-api-access-xkt4q\") pod \"cluster-samples-operator-665b6dd947-55hvq\" (UID: \"eba3e0f2-2704-43dd-b433-3a26b5200e77\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-55hvq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701588 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701612 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76f7811c-28c6-4764-b44a-07cbfdb400c4-serving-cert\") pod \"controller-manager-879f6c89f-kbq4r\" (UID: \"76f7811c-28c6-4764-b44a-07cbfdb400c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701632 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29755561-9db9-416d-b847-182fdb322ca5-trusted-ca\") pod \"console-operator-58897d9998-57xr4\" (UID: \"29755561-9db9-416d-b847-182fdb322ca5\") " pod="openshift-console-operator/console-operator-58897d9998-57xr4" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701651 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhh8p\" (UniqueName: \"kubernetes.io/projected/29755561-9db9-416d-b847-182fdb322ca5-kube-api-access-xhh8p\") pod \"console-operator-58897d9998-57xr4\" (UID: \"29755561-9db9-416d-b847-182fdb322ca5\") " pod="openshift-console-operator/console-operator-58897d9998-57xr4" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701672 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b19a457f-0893-42b7-b7ac-f3b1446fbeac-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-npfbr\" (UID: \"b19a457f-0893-42b7-b7ac-f3b1446fbeac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npfbr" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701693 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-etcd-serving-ca\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701843 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-config\") pod \"route-controller-manager-6576b87f9c-pzf64\" (UID: \"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701860 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/85538bb7-9286-4a19-9009-89105dba2678-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qp5wr\" (UID: \"85538bb7-9286-4a19-9009-89105dba2678\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qp5wr" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.701907 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dln54\" (UniqueName: \"kubernetes.io/projected/622eb95d-1893-421b-890b-0fbd87dfa0b2-kube-api-access-dln54\") pod \"control-plane-machine-set-operator-78cbb6b69f-dn5t9\" (UID: \"622eb95d-1893-421b-890b-0fbd87dfa0b2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dn5t9" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.702575 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-s4wnf" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.703635 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-trtbq"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.703784 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76f7811c-28c6-4764-b44a-07cbfdb400c4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-kbq4r\" (UID: \"76f7811c-28c6-4764-b44a-07cbfdb400c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.704437 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-shbcz"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.704491 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkfm7\" (UniqueName: \"kubernetes.io/projected/868777b7-0ff7-4705-af3c-c453bb1418a3-kube-api-access-bkfm7\") pod \"openshift-controller-manager-operator-756b6f6bc6-xxdrq\" (UID: \"868777b7-0ff7-4705-af3c-c453bb1418a3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xxdrq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.704533 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-oauth-serving-cert\") pod \"console-f9d7485db-bkp5p\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.704578 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6mtv\" (UniqueName: \"kubernetes.io/projected/86a58543-2a12-4886-93ce-8d25432a2166-kube-api-access-d6mtv\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.704670 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29755561-9db9-416d-b847-182fdb322ca5-config\") pod \"console-operator-58897d9998-57xr4\" (UID: \"29755561-9db9-416d-b847-182fdb322ca5\") " pod="openshift-console-operator/console-operator-58897d9998-57xr4" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.704750 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/063d4b06-d385-4749-8394-14041350b8e9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mbvp9\" (UID: \"063d4b06-d385-4749-8394-14041350b8e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mbvp9" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.704778 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47d55901-e472-477e-9a26-fea65fce74a5-serving-cert\") pod \"openshift-config-operator-7777fb866f-fvztb\" (UID: \"47d55901-e472-477e-9a26-fea65fce74a5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fvztb" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.704858 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/eba3e0f2-2704-43dd-b433-3a26b5200e77-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-55hvq\" (UID: \"eba3e0f2-2704-43dd-b433-3a26b5200e77\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-55hvq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.704944 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-shbcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.705628 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-trtbq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.704944 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76f7811c-28c6-4764-b44a-07cbfdb400c4-client-ca\") pod \"controller-manager-879f6c89f-kbq4r\" (UID: \"76f7811c-28c6-4764-b44a-07cbfdb400c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.705732 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76f7811c-28c6-4764-b44a-07cbfdb400c4-client-ca\") pod \"controller-manager-879f6c89f-kbq4r\" (UID: \"76f7811c-28c6-4764-b44a-07cbfdb400c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.700807 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0418ca12-7159-4da5-8b9c-3a408822a00e-audit-dir\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.706239 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/622eb95d-1893-421b-890b-0fbd87dfa0b2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dn5t9\" (UID: \"622eb95d-1893-421b-890b-0fbd87dfa0b2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dn5t9" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.706362 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit\" (UniqueName: \"kubernetes.io/configmap/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-audit\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.706385 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-serving-cert\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.706403 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0953d228-0d4e-4cb5-a8d7-2a3c1709c312-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-kzr98\" (UID: \"0953d228-0d4e-4cb5-a8d7-2a3c1709c312\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kzr98" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.706423 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwsmj\" (UniqueName: \"kubernetes.io/projected/814a2d48-7cb7-43bd-af05-951f0ccc9fc8-kube-api-access-mwsmj\") pod \"migrator-59844c95c7-vm425\" (UID: \"814a2d48-7cb7-43bd-af05-951f0ccc9fc8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vm425" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.706439 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b92e5626-f326-4da0-a2de-a10abaf78719-serving-cert\") pod \"authentication-operator-69f744f599-vh9gq\" (UID: \"b92e5626-f326-4da0-a2de-a10abaf78719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh9gq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.706461 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0418ca12-7159-4da5-8b9c-3a408822a00e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.706479 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft4r9\" (UniqueName: \"kubernetes.io/projected/7d77fa56-dcd9-464c-ae68-3f61838fd961-kube-api-access-ft4r9\") pod \"dns-operator-744455d44c-rnhf2\" (UID: \"7d77fa56-dcd9-464c-ae68-3f61838fd961\") " pod="openshift-dns-operator/dns-operator-744455d44c-rnhf2" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.706498 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b92e5626-f326-4da0-a2de-a10abaf78719-config\") pod \"authentication-operator-69f744f599-vh9gq\" (UID: \"b92e5626-f326-4da0-a2de-a10abaf78719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh9gq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.706516 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvw4j\" (UniqueName: 
\"kubernetes.io/projected/afbb3133-a1d9-48c9-a496-83babf4eb0c6-kube-api-access-pvw4j\") pod \"console-f9d7485db-bkp5p\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.706531 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e21a9e0e-6de3-467a-b719-761919fd008c-auth-proxy-config\") pod \"machine-approver-56656f9798-wl6bz\" (UID: \"e21a9e0e-6de3-467a-b719-761919fd008c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wl6bz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.707318 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29755561-9db9-416d-b847-182fdb322ca5-config\") pod \"console-operator-58897d9998-57xr4\" (UID: \"29755561-9db9-416d-b847-182fdb322ca5\") " pod="openshift-console-operator/console-operator-58897d9998-57xr4" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.707862 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ff067e9-1045-4aed-a5a3-1685140287c5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tr27b\" (UID: \"5ff067e9-1045-4aed-a5a3-1685140287c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tr27b" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.707910 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf8cw\" (UniqueName: \"kubernetes.io/projected/47d55901-e472-477e-9a26-fea65fce74a5-kube-api-access-rf8cw\") pod \"openshift-config-operator-7777fb866f-fvztb\" (UID: \"47d55901-e472-477e-9a26-fea65fce74a5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fvztb" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.707963 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.708018 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-client-ca\") pod \"route-controller-manager-6576b87f9c-pzf64\" (UID: \"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.708046 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0418ca12-7159-4da5-8b9c-3a408822a00e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.708359 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/0418ca12-7159-4da5-8b9c-3a408822a00e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.708465 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wkrfg"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.708555 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29755561-9db9-416d-b847-182fdb322ca5-trusted-ca\") pod \"console-operator-58897d9998-57xr4\" (UID: \"29755561-9db9-416d-b847-182fdb322ca5\") " pod="openshift-console-operator/console-operator-58897d9998-57xr4" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.708823 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-client-ca\") pod \"route-controller-manager-6576b87f9c-pzf64\" (UID: \"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.708921 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-etcd-client\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.708953 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.709031 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/868777b7-0ff7-4705-af3c-c453bb1418a3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xxdrq\" (UID: \"868777b7-0ff7-4705-af3c-c453bb1418a3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xxdrq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.709082 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfgnh\" (UniqueName: \"kubernetes.io/projected/5ff067e9-1045-4aed-a5a3-1685140287c5-kube-api-access-vfgnh\") pod \"machine-config-controller-84d6567774-tr27b\" (UID: \"5ff067e9-1045-4aed-a5a3-1685140287c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tr27b" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.709150 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/85538bb7-9286-4a19-9009-89105dba2678-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qp5wr\" (UID: \"85538bb7-9286-4a19-9009-89105dba2678\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qp5wr" Nov 24 16:54:19 crc 
kubenswrapper[4768]: I1124 16:54:19.709208 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0418ca12-7159-4da5-8b9c-3a408822a00e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.709442 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/85538bb7-9286-4a19-9009-89105dba2678-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qp5wr\" (UID: \"85538bb7-9286-4a19-9009-89105dba2678\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qp5wr" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.709480 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gk5b\" (UniqueName: \"kubernetes.io/projected/85538bb7-9286-4a19-9009-89105dba2678-kube-api-access-5gk5b\") pod \"cluster-image-registry-operator-dc59b4c8b-qp5wr\" (UID: \"85538bb7-9286-4a19-9009-89105dba2678\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qp5wr" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.709531 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b19a457f-0893-42b7-b7ac-f3b1446fbeac-config\") pod \"openshift-apiserver-operator-796bbdcf4f-npfbr\" (UID: \"b19a457f-0893-42b7-b7ac-f3b1446fbeac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npfbr" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.709564 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0418ca12-7159-4da5-8b9c-3a408822a00e-encryption-config\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.709913 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mbvp9"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.710047 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76f7811c-28c6-4764-b44a-07cbfdb400c4-config\") pod \"controller-manager-879f6c89f-kbq4r\" (UID: \"76f7811c-28c6-4764-b44a-07cbfdb400c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.710318 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b19a457f-0893-42b7-b7ac-f3b1446fbeac-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-npfbr\" (UID: \"b19a457f-0893-42b7-b7ac-f3b1446fbeac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npfbr" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.710444 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.710482 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/5ff067e9-1045-4aed-a5a3-1685140287c5-proxy-tls\") pod \"machine-config-controller-84d6567774-tr27b\" (UID: \"5ff067e9-1045-4aed-a5a3-1685140287c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tr27b" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.710506 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22ca0047-9042-4627-a34d-1fab214b831a-metrics-tls\") pod \"ingress-operator-5b745b69d9-tsz9j\" (UID: \"22ca0047-9042-4627-a34d-1fab214b831a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsz9j" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.710521 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/86a58543-2a12-4886-93ce-8d25432a2166-audit-dir\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.710546 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.710568 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0953d228-0d4e-4cb5-a8d7-2a3c1709c312-config\") pod \"kube-apiserver-operator-766d6c64bb-kzr98\" (UID: \"0953d228-0d4e-4cb5-a8d7-2a3c1709c312\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kzr98" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.710592 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-console-config\") pod \"console-f9d7485db-bkp5p\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.710612 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdftv\" (UniqueName: \"kubernetes.io/projected/063d4b06-d385-4749-8394-14041350b8e9-kube-api-access-hdftv\") pod \"machine-api-operator-5694c8668f-mbvp9\" (UID: \"063d4b06-d385-4749-8394-14041350b8e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mbvp9" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.710629 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-service-ca\") pod \"console-f9d7485db-bkp5p\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.710626 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b19a457f-0893-42b7-b7ac-f3b1446fbeac-config\") pod \"openshift-apiserver-operator-796bbdcf4f-npfbr\" (UID: 
\"b19a457f-0893-42b7-b7ac-f3b1446fbeac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npfbr" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.710678 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.710718 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.710763 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0418ca12-7159-4da5-8b9c-3a408822a00e-audit-policies\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.710785 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w246\" (UniqueName: \"kubernetes.io/projected/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-kube-api-access-2w246\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.710803 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-trusted-ca-bundle\") pod \"console-f9d7485db-bkp5p\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.710845 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29755561-9db9-416d-b847-182fdb322ca5-serving-cert\") pod \"console-operator-58897d9998-57xr4\" (UID: \"29755561-9db9-416d-b847-182fdb322ca5\") " pod="openshift-console-operator/console-operator-58897d9998-57xr4" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.710926 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0418ca12-7159-4da5-8b9c-3a408822a00e-etcd-client\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.710932 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5p48\" (UniqueName: \"kubernetes.io/projected/22ca0047-9042-4627-a34d-1fab214b831a-kube-api-access-b5p48\") pod \"ingress-operator-5b745b69d9-tsz9j\" (UID: \"22ca0047-9042-4627-a34d-1fab214b831a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsz9j" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.710976 
4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c79p\" (UniqueName: \"kubernetes.io/projected/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-kube-api-access-2c79p\") pod \"route-controller-manager-6576b87f9c-pzf64\" (UID: \"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.711025 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/afbb3133-a1d9-48c9-a496-83babf4eb0c6-console-serving-cert\") pod \"console-f9d7485db-bkp5p\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.711407 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0418ca12-7159-4da5-8b9c-3a408822a00e-audit-policies\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.711779 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0418ca12-7159-4da5-8b9c-3a408822a00e-serving-cert\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.712947 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-serving-cert\") pod \"route-controller-manager-6576b87f9c-pzf64\" (UID: \"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.714597 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-88z72"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.717028 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29755561-9db9-416d-b847-182fdb322ca5-serving-cert\") pod \"console-operator-58897d9998-57xr4\" (UID: \"29755561-9db9-416d-b847-182fdb322ca5\") " pod="openshift-console-operator/console-operator-58897d9998-57xr4" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.717652 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0418ca12-7159-4da5-8b9c-3a408822a00e-encryption-config\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.717840 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vm425"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.719298 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ljqpv"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.721127 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xxdrq"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.722225 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4dgcz"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.721835 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76f7811c-28c6-4764-b44a-07cbfdb400c4-serving-cert\") pod \"controller-manager-879f6c89f-kbq4r\" (UID: \"76f7811c-28c6-4764-b44a-07cbfdb400c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.723339 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vdhkx"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.724962 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-bkp5p"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.726228 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dn5t9"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.727476 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.728475 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.728605 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qp5wr"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.732903 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vh9gq"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.734483 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tr27b"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.735283 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tsz9j"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.736327 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-grlvv"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.737461 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9pdk7"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.738563 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdb6h"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.739771 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.740865 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kzr98"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.742030 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-mgmbb"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.743097 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rnhf2"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.744289 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dtzpg"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.745605 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sznn8"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.747130 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.748387 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-s4wnf"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.749073 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.749458 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-trtbq"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.750536 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rhk4d"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.751607 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fvztb"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.752623 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-cnfj2"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.753284 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cnfj2" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.753623 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dfw5p"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.754775 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-26hqh"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.754867 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.755936 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dfw5p"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.757256 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cnfj2"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.759450 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2db9l"] Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.769753 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.789054 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.811924 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ff067e9-1045-4aed-a5a3-1685140287c5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tr27b\" (UID: \"5ff067e9-1045-4aed-a5a3-1685140287c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tr27b" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812038 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf8cw\" (UniqueName: \"kubernetes.io/projected/47d55901-e472-477e-9a26-fea65fce74a5-kube-api-access-rf8cw\") pod \"openshift-config-operator-7777fb866f-fvztb\" (UID: \"47d55901-e472-477e-9a26-fea65fce74a5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fvztb" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812174 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812307 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-etcd-client\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812338 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812378 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/868777b7-0ff7-4705-af3c-c453bb1418a3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xxdrq\" (UID: \"868777b7-0ff7-4705-af3c-c453bb1418a3\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xxdrq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812404 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfgnh\" (UniqueName: \"kubernetes.io/projected/5ff067e9-1045-4aed-a5a3-1685140287c5-kube-api-access-vfgnh\") pod \"machine-config-controller-84d6567774-tr27b\" (UID: \"5ff067e9-1045-4aed-a5a3-1685140287c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tr27b" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812456 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/85538bb7-9286-4a19-9009-89105dba2678-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qp5wr\" (UID: \"85538bb7-9286-4a19-9009-89105dba2678\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qp5wr" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812516 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/85538bb7-9286-4a19-9009-89105dba2678-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qp5wr\" (UID: \"85538bb7-9286-4a19-9009-89105dba2678\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qp5wr" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812556 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gk5b\" (UniqueName: \"kubernetes.io/projected/85538bb7-9286-4a19-9009-89105dba2678-kube-api-access-5gk5b\") pod \"cluster-image-registry-operator-dc59b4c8b-qp5wr\" (UID: \"85538bb7-9286-4a19-9009-89105dba2678\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qp5wr" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812586 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ff067e9-1045-4aed-a5a3-1685140287c5-proxy-tls\") pod \"machine-config-controller-84d6567774-tr27b\" (UID: \"5ff067e9-1045-4aed-a5a3-1685140287c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tr27b" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812610 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22ca0047-9042-4627-a34d-1fab214b831a-metrics-tls\") pod \"ingress-operator-5b745b69d9-tsz9j\" (UID: \"22ca0047-9042-4627-a34d-1fab214b831a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsz9j" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812634 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/86a58543-2a12-4886-93ce-8d25432a2166-audit-dir\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812657 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812685 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0953d228-0d4e-4cb5-a8d7-2a3c1709c312-config\") pod \"kube-apiserver-operator-766d6c64bb-kzr98\" (UID: \"0953d228-0d4e-4cb5-a8d7-2a3c1709c312\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kzr98" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812708 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-console-config\") pod \"console-f9d7485db-bkp5p\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812733 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdftv\" (UniqueName: \"kubernetes.io/projected/063d4b06-d385-4749-8394-14041350b8e9-kube-api-access-hdftv\") pod \"machine-api-operator-5694c8668f-mbvp9\" (UID: \"063d4b06-d385-4749-8394-14041350b8e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mbvp9" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812755 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-service-ca\") pod \"console-f9d7485db-bkp5p\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812776 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812802 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812830 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w246\" (UniqueName: \"kubernetes.io/projected/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-kube-api-access-2w246\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812853 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-trusted-ca-bundle\") pod \"console-f9d7485db-bkp5p\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812879 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5p48\" (UniqueName: 
\"kubernetes.io/projected/22ca0047-9042-4627-a34d-1fab214b831a-kube-api-access-b5p48\") pod \"ingress-operator-5b745b69d9-tsz9j\" (UID: \"22ca0047-9042-4627-a34d-1fab214b831a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsz9j" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812909 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/afbb3133-a1d9-48c9-a496-83babf4eb0c6-console-serving-cert\") pod \"console-f9d7485db-bkp5p\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812933 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/868777b7-0ff7-4705-af3c-c453bb1418a3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xxdrq\" (UID: \"868777b7-0ff7-4705-af3c-c453bb1418a3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xxdrq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812954 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/afbb3133-a1d9-48c9-a496-83babf4eb0c6-console-oauth-config\") pod \"console-f9d7485db-bkp5p\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812976 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e21a9e0e-6de3-467a-b719-761919fd008c-machine-approver-tls\") pod \"machine-approver-56656f9798-wl6bz\" (UID: \"e21a9e0e-6de3-467a-b719-761919fd008c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wl6bz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.812989 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5ff067e9-1045-4aed-a5a3-1685140287c5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tr27b\" (UID: \"5ff067e9-1045-4aed-a5a3-1685140287c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tr27b" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.813002 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.813029 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-config\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.813056 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b92e5626-f326-4da0-a2de-a10abaf78719-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vh9gq\" (UID: 
\"b92e5626-f326-4da0-a2de-a10abaf78719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh9gq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.813081 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e21a9e0e-6de3-467a-b719-761919fd008c-config\") pod \"machine-approver-56656f9798-wl6bz\" (UID: \"e21a9e0e-6de3-467a-b719-761919fd008c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wl6bz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.813107 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22ca0047-9042-4627-a34d-1fab214b831a-trusted-ca\") pod \"ingress-operator-5b745b69d9-tsz9j\" (UID: \"22ca0047-9042-4627-a34d-1fab214b831a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsz9j" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.813140 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/063d4b06-d385-4749-8394-14041350b8e9-images\") pod \"machine-api-operator-5694c8668f-mbvp9\" (UID: \"063d4b06-d385-4749-8394-14041350b8e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mbvp9" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.813164 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f88px\" (UniqueName: \"kubernetes.io/projected/b92e5626-f326-4da0-a2de-a10abaf78719-kube-api-access-f88px\") pod \"authentication-operator-69f744f599-vh9gq\" (UID: \"b92e5626-f326-4da0-a2de-a10abaf78719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh9gq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.813190 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d77fa56-dcd9-464c-ae68-3f61838fd961-metrics-tls\") pod \"dns-operator-744455d44c-rnhf2\" (UID: \"7d77fa56-dcd9-464c-ae68-3f61838fd961\") " pod="openshift-dns-operator/dns-operator-744455d44c-rnhf2" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.813223 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/063d4b06-d385-4749-8394-14041350b8e9-config\") pod \"machine-api-operator-5694c8668f-mbvp9\" (UID: \"063d4b06-d385-4749-8394-14041350b8e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mbvp9" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.813247 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csbc2\" (UniqueName: \"kubernetes.io/projected/e21a9e0e-6de3-467a-b719-761919fd008c-kube-api-access-csbc2\") pod \"machine-approver-56656f9798-wl6bz\" (UID: \"e21a9e0e-6de3-467a-b719-761919fd008c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wl6bz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.813270 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.813300 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.813328 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-node-pullsecrets\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.813375 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/47d55901-e472-477e-9a26-fea65fce74a5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fvztb\" (UID: \"47d55901-e472-477e-9a26-fea65fce74a5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fvztb" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.813403 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nq7t\" (UniqueName: \"kubernetes.io/projected/0178dda3-3c96-409e-8dee-789ecec9a47f-kube-api-access-6nq7t\") pod \"downloads-7954f5f757-88z72\" (UID: \"0178dda3-3c96-409e-8dee-789ecec9a47f\") " pod="openshift-console/downloads-7954f5f757-88z72" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.813590 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/86a58543-2a12-4886-93ce-8d25432a2166-audit-dir\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.813711 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.813784 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-image-import-ca\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.814153 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-service-ca\") pod \"console-f9d7485db-bkp5p\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.814193 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/868777b7-0ff7-4705-af3c-c453bb1418a3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xxdrq\" (UID: 
\"868777b7-0ff7-4705-af3c-c453bb1418a3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xxdrq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.814507 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-console-config\") pod \"console-f9d7485db-bkp5p\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.814863 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e21a9e0e-6de3-467a-b719-761919fd008c-config\") pod \"machine-approver-56656f9798-wl6bz\" (UID: \"e21a9e0e-6de3-467a-b719-761919fd008c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wl6bz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.814917 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/47d55901-e472-477e-9a26-fea65fce74a5-available-featuregates\") pod \"openshift-config-operator-7777fb866f-fvztb\" (UID: \"47d55901-e472-477e-9a26-fea65fce74a5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fvztb" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.815302 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/063d4b06-d385-4749-8394-14041350b8e9-images\") pod \"machine-api-operator-5694c8668f-mbvp9\" (UID: \"063d4b06-d385-4749-8394-14041350b8e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mbvp9" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.815697 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.815775 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-node-pullsecrets\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.816232 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-encryption-config\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.819221 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0953d228-0d4e-4cb5-a8d7-2a3c1709c312-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-kzr98\" (UID: \"0953d228-0d4e-4cb5-a8d7-2a3c1709c312\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kzr98" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.817881 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.817984 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-config\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.818086 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.818124 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/afbb3133-a1d9-48c9-a496-83babf4eb0c6-console-serving-cert\") pod \"console-f9d7485db-bkp5p\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.818480 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/868777b7-0ff7-4705-af3c-c453bb1418a3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xxdrq\" (UID: \"868777b7-0ff7-4705-af3c-c453bb1418a3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xxdrq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.818517 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.818873 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.818987 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-etcd-client\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.818991 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.819072 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-image-import-ca\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.817843 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.819676 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b92e5626-f326-4da0-a2de-a10abaf78719-service-ca-bundle\") pod \"authentication-operator-69f744f599-vh9gq\" (UID: \"b92e5626-f326-4da0-a2de-a10abaf78719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh9gq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.819722 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-audit-policies\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.819751 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.819767 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/85538bb7-9286-4a19-9009-89105dba2678-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qp5wr\" (UID: \"85538bb7-9286-4a19-9009-89105dba2678\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qp5wr" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.819784 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.819815 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22ca0047-9042-4627-a34d-1fab214b831a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tsz9j\" (UID: \"22ca0047-9042-4627-a34d-1fab214b831a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsz9j" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.819846 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-audit-dir\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.819876 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkt4q\" (UniqueName: \"kubernetes.io/projected/eba3e0f2-2704-43dd-b433-3a26b5200e77-kube-api-access-xkt4q\") pod \"cluster-samples-operator-665b6dd947-55hvq\" (UID: \"eba3e0f2-2704-43dd-b433-3a26b5200e77\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-55hvq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.819905 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.819973 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-etcd-serving-ca\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.820003 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/85538bb7-9286-4a19-9009-89105dba2678-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qp5wr\" (UID: \"85538bb7-9286-4a19-9009-89105dba2678\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qp5wr" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.820032 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dln54\" (UniqueName: \"kubernetes.io/projected/622eb95d-1893-421b-890b-0fbd87dfa0b2-kube-api-access-dln54\") pod \"control-plane-machine-set-operator-78cbb6b69f-dn5t9\" (UID: \"622eb95d-1893-421b-890b-0fbd87dfa0b2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dn5t9" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.820076 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkfm7\" (UniqueName: \"kubernetes.io/projected/868777b7-0ff7-4705-af3c-c453bb1418a3-kube-api-access-bkfm7\") pod \"openshift-controller-manager-operator-756b6f6bc6-xxdrq\" (UID: \"868777b7-0ff7-4705-af3c-c453bb1418a3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xxdrq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.820105 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-oauth-serving-cert\") pod \"console-f9d7485db-bkp5p\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.820132 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6mtv\" (UniqueName: \"kubernetes.io/projected/86a58543-2a12-4886-93ce-8d25432a2166-kube-api-access-d6mtv\") pod 
\"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.820165 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/063d4b06-d385-4749-8394-14041350b8e9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mbvp9\" (UID: \"063d4b06-d385-4749-8394-14041350b8e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mbvp9" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.820192 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47d55901-e472-477e-9a26-fea65fce74a5-serving-cert\") pod \"openshift-config-operator-7777fb866f-fvztb\" (UID: \"47d55901-e472-477e-9a26-fea65fce74a5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fvztb" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.820217 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/eba3e0f2-2704-43dd-b433-3a26b5200e77-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-55hvq\" (UID: \"eba3e0f2-2704-43dd-b433-3a26b5200e77\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-55hvq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.820232 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-audit-policies\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.820246 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/622eb95d-1893-421b-890b-0fbd87dfa0b2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dn5t9\" (UID: \"622eb95d-1893-421b-890b-0fbd87dfa0b2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dn5t9" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.820274 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-audit\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.820303 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-serving-cert\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.820321 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b92e5626-f326-4da0-a2de-a10abaf78719-service-ca-bundle\") pod \"authentication-operator-69f744f599-vh9gq\" (UID: \"b92e5626-f326-4da0-a2de-a10abaf78719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh9gq" 
Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.820330 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0953d228-0d4e-4cb5-a8d7-2a3c1709c312-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-kzr98\" (UID: \"0953d228-0d4e-4cb5-a8d7-2a3c1709c312\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kzr98" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.820379 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwsmj\" (UniqueName: \"kubernetes.io/projected/814a2d48-7cb7-43bd-af05-951f0ccc9fc8-kube-api-access-mwsmj\") pod \"migrator-59844c95c7-vm425\" (UID: \"814a2d48-7cb7-43bd-af05-951f0ccc9fc8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vm425" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.820405 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b92e5626-f326-4da0-a2de-a10abaf78719-serving-cert\") pod \"authentication-operator-69f744f599-vh9gq\" (UID: \"b92e5626-f326-4da0-a2de-a10abaf78719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh9gq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.820451 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft4r9\" (UniqueName: \"kubernetes.io/projected/7d77fa56-dcd9-464c-ae68-3f61838fd961-kube-api-access-ft4r9\") pod \"dns-operator-744455d44c-rnhf2\" (UID: \"7d77fa56-dcd9-464c-ae68-3f61838fd961\") " pod="openshift-dns-operator/dns-operator-744455d44c-rnhf2" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.820479 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b92e5626-f326-4da0-a2de-a10abaf78719-config\") pod \"authentication-operator-69f744f599-vh9gq\" (UID: \"b92e5626-f326-4da0-a2de-a10abaf78719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh9gq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.820514 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvw4j\" (UniqueName: \"kubernetes.io/projected/afbb3133-a1d9-48c9-a496-83babf4eb0c6-kube-api-access-pvw4j\") pod \"console-f9d7485db-bkp5p\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.820544 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e21a9e0e-6de3-467a-b719-761919fd008c-auth-proxy-config\") pod \"machine-approver-56656f9798-wl6bz\" (UID: \"e21a9e0e-6de3-467a-b719-761919fd008c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wl6bz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.820944 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-oauth-serving-cert\") pod \"console-f9d7485db-bkp5p\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.819749 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.821016 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-audit-dir\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.821147 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-encryption-config\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.821206 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e21a9e0e-6de3-467a-b719-761919fd008c-machine-approver-tls\") pod \"machine-approver-56656f9798-wl6bz\" (UID: \"e21a9e0e-6de3-467a-b719-761919fd008c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wl6bz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.821434 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e21a9e0e-6de3-467a-b719-761919fd008c-auth-proxy-config\") pod \"machine-approver-56656f9798-wl6bz\" (UID: \"e21a9e0e-6de3-467a-b719-761919fd008c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wl6bz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.821598 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-etcd-serving-ca\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.821658 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.821727 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/afbb3133-a1d9-48c9-a496-83babf4eb0c6-console-oauth-config\") pod \"console-f9d7485db-bkp5p\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.822265 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b92e5626-f326-4da0-a2de-a10abaf78719-config\") pod \"authentication-operator-69f744f599-vh9gq\" (UID: \"b92e5626-f326-4da0-a2de-a10abaf78719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh9gq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 
16:54:19.822494 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-audit\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.822566 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.822865 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/85538bb7-9286-4a19-9009-89105dba2678-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qp5wr\" (UID: \"85538bb7-9286-4a19-9009-89105dba2678\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qp5wr" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.823259 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.823866 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-serving-cert\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.824226 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/063d4b06-d385-4749-8394-14041350b8e9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mbvp9\" (UID: \"063d4b06-d385-4749-8394-14041350b8e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mbvp9" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.824406 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b92e5626-f326-4da0-a2de-a10abaf78719-serving-cert\") pod \"authentication-operator-69f744f599-vh9gq\" (UID: \"b92e5626-f326-4da0-a2de-a10abaf78719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh9gq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.824591 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.825719 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b92e5626-f326-4da0-a2de-a10abaf78719-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-vh9gq\" (UID: \"b92e5626-f326-4da0-a2de-a10abaf78719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh9gq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.825993 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.826013 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47d55901-e472-477e-9a26-fea65fce74a5-serving-cert\") pod \"openshift-config-operator-7777fb866f-fvztb\" (UID: \"47d55901-e472-477e-9a26-fea65fce74a5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fvztb" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.828484 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-trusted-ca-bundle\") pod \"console-f9d7485db-bkp5p\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.829168 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.833927 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/063d4b06-d385-4749-8394-14041350b8e9-config\") pod \"machine-api-operator-5694c8668f-mbvp9\" (UID: \"063d4b06-d385-4749-8394-14041350b8e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mbvp9" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.834259 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/eba3e0f2-2704-43dd-b433-3a26b5200e77-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-55hvq\" (UID: \"eba3e0f2-2704-43dd-b433-3a26b5200e77\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-55hvq" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.849529 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.869258 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.880261 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d77fa56-dcd9-464c-ae68-3f61838fd961-metrics-tls\") pod \"dns-operator-744455d44c-rnhf2\" (UID: \"7d77fa56-dcd9-464c-ae68-3f61838fd961\") " pod="openshift-dns-operator/dns-operator-744455d44c-rnhf2" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.889414 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.909032 4768 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.928939 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.949048 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.969609 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 24 16:54:19 crc kubenswrapper[4768]: I1124 16:54:19.988574 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.010029 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.021591 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22ca0047-9042-4627-a34d-1fab214b831a-metrics-tls\") pod \"ingress-operator-5b745b69d9-tsz9j\" (UID: \"22ca0047-9042-4627-a34d-1fab214b831a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsz9j" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.038739 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.046370 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22ca0047-9042-4627-a34d-1fab214b831a-trusted-ca\") pod \"ingress-operator-5b745b69d9-tsz9j\" (UID: \"22ca0047-9042-4627-a34d-1fab214b831a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsz9j" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.049414 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.070573 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.090074 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.109718 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.114233 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0953d228-0d4e-4cb5-a8d7-2a3c1709c312-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-kzr98\" (UID: \"0953d228-0d4e-4cb5-a8d7-2a3c1709c312\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kzr98" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.129332 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.135216 4768 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0953d228-0d4e-4cb5-a8d7-2a3c1709c312-config\") pod \"kube-apiserver-operator-766d6c64bb-kzr98\" (UID: \"0953d228-0d4e-4cb5-a8d7-2a3c1709c312\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kzr98" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.149736 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.169165 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.190376 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.209832 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.229872 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.236824 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/622eb95d-1893-421b-890b-0fbd87dfa0b2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dn5t9\" (UID: \"622eb95d-1893-421b-890b-0fbd87dfa0b2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dn5t9" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.249880 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.290419 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.290755 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.298546 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ff067e9-1045-4aed-a5a3-1685140287c5-proxy-tls\") pod \"machine-config-controller-84d6567774-tr27b\" (UID: \"5ff067e9-1045-4aed-a5a3-1685140287c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tr27b" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.310219 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.349445 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.370429 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.389260 4768 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.409465 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.429734 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.450407 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.470508 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.490309 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.510125 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.529557 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.549022 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.569601 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.589749 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.609690 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.627880 4768 request.go:700] Waited for 1.004323801s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-metrics-certs-default&limit=500&resourceVersion=0 Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.629755 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.649167 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.670100 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.689254 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.710018 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 24 16:54:20 crc 
kubenswrapper[4768]: I1124 16:54:20.730568 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.749963 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.770111 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.790153 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.809629 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.829441 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.850436 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.869837 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.889685 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.910726 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.929035 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.950170 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.970164 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 24 16:54:20 crc kubenswrapper[4768]: I1124 16:54:20.989412 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.019694 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.030294 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.050072 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.069918 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.091054 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 24 
16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.110023 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.150161 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.170434 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.210175 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cp8w\" (UniqueName: \"kubernetes.io/projected/76f7811c-28c6-4764-b44a-07cbfdb400c4-kube-api-access-6cp8w\") pod \"controller-manager-879f6c89f-kbq4r\" (UID: \"76f7811c-28c6-4764-b44a-07cbfdb400c4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.210596 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.230208 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.250048 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.276840 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.290667 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
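
"No sandbox for pod can be found. Need to start a new one" (util.go:30) marks the moment the kubelet's runtime manager finds no live CRI sandbox for a pod, which is expected for every pod on the first sync after a node restart; the next step is a sandbox-creation request to the container runtime, and the "SyncLoop UPDATE" lines that follow are the pod workers reacting to the resulting status changes. A short sketch in the same spirit as above (same hypothetical kubelet.log, one record per line) that lists which pods needed a fresh sandbox, in order:

    #!/usr/bin/env python3
    """List pods for which the kubelet decided to start a new sandbox."""
    import re

    NO_SANDBOX = re.compile(r'No sandbox for pod can be found\. '
                            r'Need to start a new one" pod="([^"]+)"')

    pods = [m.group(1) for line in open("kubelet.log")
            if (m := NO_SANDBOX.search(line))]

    print(f"{len(pods)} pods needed a new sandbox:")
    for p in pods:
        print("  " + p)
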
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.329381 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d2k4\" (UniqueName: \"kubernetes.io/projected/b19a457f-0893-42b7-b7ac-f3b1446fbeac-kube-api-access-2d2k4\") pod \"openshift-apiserver-operator-796bbdcf4f-npfbr\" (UID: \"b19a457f-0893-42b7-b7ac-f3b1446fbeac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npfbr" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.330026 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.349159 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.369332 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.389135 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.410453 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.429455 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.474679 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d9r5\" (UniqueName: \"kubernetes.io/projected/0418ca12-7159-4da5-8b9c-3a408822a00e-kube-api-access-9d9r5\") pod \"apiserver-7bbb656c7d-jz5sv\" (UID: \"0418ca12-7159-4da5-8b9c-3a408822a00e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.485449 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhh8p\" (UniqueName: \"kubernetes.io/projected/29755561-9db9-416d-b847-182fdb322ca5-kube-api-access-xhh8p\") pod \"console-operator-58897d9998-57xr4\" (UID: \"29755561-9db9-416d-b847-182fdb322ca5\") " pod="openshift-console-operator/console-operator-58897d9998-57xr4" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.523690 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.529764 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c79p\" (UniqueName: \"kubernetes.io/projected/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-kube-api-access-2c79p\") pod \"route-controller-manager-6576b87f9c-pzf64\" (UID: \"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.530976 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.549998 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.566002 4768 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npfbr" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.569223 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.588756 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.589374 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.610306 4768 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.629251 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.637233 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.647495 4768 request.go:700] Waited for 1.834981327s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.651121 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kbq4r"] Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.658565 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-57xr4" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.667598 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf8cw\" (UniqueName: \"kubernetes.io/projected/47d55901-e472-477e-9a26-fea65fce74a5-kube-api-access-rf8cw\") pod \"openshift-config-operator-7777fb866f-fvztb\" (UID: \"47d55901-e472-477e-9a26-fea65fce74a5\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-fvztb" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.684254 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/85538bb7-9286-4a19-9009-89105dba2678-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qp5wr\" (UID: \"85538bb7-9286-4a19-9009-89105dba2678\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qp5wr" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.705883 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfgnh\" (UniqueName: \"kubernetes.io/projected/5ff067e9-1045-4aed-a5a3-1685140287c5-kube-api-access-vfgnh\") pod \"machine-config-controller-84d6567774-tr27b\" (UID: \"5ff067e9-1045-4aed-a5a3-1685140287c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tr27b" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.733600 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gk5b\" (UniqueName: \"kubernetes.io/projected/85538bb7-9286-4a19-9009-89105dba2678-kube-api-access-5gk5b\") pod \"cluster-image-registry-operator-dc59b4c8b-qp5wr\" (UID: \"85538bb7-9286-4a19-9009-89105dba2678\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qp5wr" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.751779 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdftv\" (UniqueName: \"kubernetes.io/projected/063d4b06-d385-4749-8394-14041350b8e9-kube-api-access-hdftv\") pod \"machine-api-operator-5694c8668f-mbvp9\" (UID: \"063d4b06-d385-4749-8394-14041350b8e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mbvp9" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.756520 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-mbvp9" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.772848 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f88px\" (UniqueName: \"kubernetes.io/projected/b92e5626-f326-4da0-a2de-a10abaf78719-kube-api-access-f88px\") pod \"authentication-operator-69f744f599-vh9gq\" (UID: \"b92e5626-f326-4da0-a2de-a10abaf78719\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-vh9gq" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.773712 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npfbr"] Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.791078 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w246\" (UniqueName: \"kubernetes.io/projected/0675c0cb-77d3-43c1-a7ba-ff51c9307f21-kube-api-access-2w246\") pod \"apiserver-76f77b778f-gq6hn\" (UID: \"0675c0cb-77d3-43c1-a7ba-ff51c9307f21\") " pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.791414 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qp5wr" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.804630 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5p48\" (UniqueName: \"kubernetes.io/projected/22ca0047-9042-4627-a34d-1fab214b831a-kube-api-access-b5p48\") pod \"ingress-operator-5b745b69d9-tsz9j\" (UID: \"22ca0047-9042-4627-a34d-1fab214b831a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsz9j" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.810850 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fvztb" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.821012 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64"] Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.826270 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csbc2\" (UniqueName: \"kubernetes.io/projected/e21a9e0e-6de3-467a-b719-761919fd008c-kube-api-access-csbc2\") pod \"machine-approver-56656f9798-wl6bz\" (UID: \"e21a9e0e-6de3-467a-b719-761919fd008c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wl6bz" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.837981 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-57xr4"] Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.847235 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nq7t\" (UniqueName: \"kubernetes.io/projected/0178dda3-3c96-409e-8dee-789ecec9a47f-kube-api-access-6nq7t\") pod \"downloads-7954f5f757-88z72\" (UID: \"0178dda3-3c96-409e-8dee-789ecec9a47f\") " pod="openshift-console/downloads-7954f5f757-88z72" Nov 24 16:54:21 crc kubenswrapper[4768]: W1124 16:54:21.854108 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29755561_9db9_416d_b847_182fdb322ca5.slice/crio-36e8508d322b7c825f995c3dc6d86f29474c40b248f905e9078186b29df5e0c9 WatchSource:0}: Error finding container 36e8508d322b7c825f995c3dc6d86f29474c40b248f905e9078186b29df5e0c9: Status 404 returned error can't find the container with id 36e8508d322b7c825f995c3dc6d86f29474c40b248f905e9078186b29df5e0c9 Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.862393 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-88z72" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.869115 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22ca0047-9042-4627-a34d-1fab214b831a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tsz9j\" (UID: \"22ca0047-9042-4627-a34d-1fab214b831a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsz9j" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.882642 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6mtv\" (UniqueName: \"kubernetes.io/projected/86a58543-2a12-4886-93ce-8d25432a2166-kube-api-access-d6mtv\") pod \"oauth-openshift-558db77b4-4dgcz\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.883407 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsz9j" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.891601 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv"] Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.906278 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dln54\" (UniqueName: \"kubernetes.io/projected/622eb95d-1893-421b-890b-0fbd87dfa0b2-kube-api-access-dln54\") pod \"control-plane-machine-set-operator-78cbb6b69f-dn5t9\" (UID: \"622eb95d-1893-421b-890b-0fbd87dfa0b2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dn5t9" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.909259 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tr27b" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.924627 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkfm7\" (UniqueName: \"kubernetes.io/projected/868777b7-0ff7-4705-af3c-c453bb1418a3-kube-api-access-bkfm7\") pod \"openshift-controller-manager-operator-756b6f6bc6-xxdrq\" (UID: \"868777b7-0ff7-4705-af3c-c453bb1418a3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xxdrq" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.940604 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mbvp9"] Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.943730 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkt4q\" (UniqueName: \"kubernetes.io/projected/eba3e0f2-2704-43dd-b433-3a26b5200e77-kube-api-access-xkt4q\") pod \"cluster-samples-operator-665b6dd947-55hvq\" (UID: \"eba3e0f2-2704-43dd-b433-3a26b5200e77\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-55hvq" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.962967 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0953d228-0d4e-4cb5-a8d7-2a3c1709c312-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-kzr98\" (UID: \"0953d228-0d4e-4cb5-a8d7-2a3c1709c312\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kzr98" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.972687 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-vh9gq" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.983657 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvw4j\" (UniqueName: \"kubernetes.io/projected/afbb3133-a1d9-48c9-a496-83babf4eb0c6-kube-api-access-pvw4j\") pod \"console-f9d7485db-bkp5p\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.987619 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.996919 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-fvztb"] Nov 24 16:54:21 crc kubenswrapper[4768]: I1124 16:54:21.997320 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wl6bz" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.006258 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwsmj\" (UniqueName: \"kubernetes.io/projected/814a2d48-7cb7-43bd-af05-951f0ccc9fc8-kube-api-access-mwsmj\") pod \"migrator-59844c95c7-vm425\" (UID: \"814a2d48-7cb7-43bd-af05-951f0ccc9fc8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vm425" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.006984 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-55hvq" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.020935 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qp5wr"] Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.024017 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft4r9\" (UniqueName: \"kubernetes.io/projected/7d77fa56-dcd9-464c-ae68-3f61838fd961-kube-api-access-ft4r9\") pod \"dns-operator-744455d44c-rnhf2\" (UID: \"7d77fa56-dcd9-464c-ae68-3f61838fd961\") " pod="openshift-dns-operator/dns-operator-744455d44c-rnhf2" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.028011 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:22 crc kubenswrapper[4768]: W1124 16:54:22.040282 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47d55901_e472_477e_9a26_fea65fce74a5.slice/crio-e8280d86b30a64a71a478aa3396d96cd2e7a63fa72a6d734b1aeff94f5eb5af6 WatchSource:0}: Error finding container e8280d86b30a64a71a478aa3396d96cd2e7a63fa72a6d734b1aeff94f5eb5af6: Status 404 returned error can't find the container with id e8280d86b30a64a71a478aa3396d96cd2e7a63fa72a6d734b1aeff94f5eb5af6 Nov 24 16:54:22 crc kubenswrapper[4768]: W1124 16:54:22.041846 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85538bb7_9286_4a19_9009_89105dba2678.slice/crio-e6c96b1cdde5ae3d433527c8e53660694e50d97092fbe1a35ca5af94c5356039 WatchSource:0}: Error finding container e6c96b1cdde5ae3d433527c8e53660694e50d97092fbe1a35ca5af94c5356039: Status 404 returned error can't find the container with id e6c96b1cdde5ae3d433527c8e53660694e50d97092fbe1a35ca5af94c5356039 Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.056527 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/77380a57-af04-4e2e-8791-d00f466f31a9-etcd-ca\") pod \"etcd-operator-b45778765-wkrfg\" (UID: \"77380a57-af04-4e2e-8791-d00f466f31a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.056632 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89c4d056-b780-4eb8-8860-44c16b3cb1ba-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dtzpg\" (UID: \"89c4d056-b780-4eb8-8860-44c16b3cb1ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dtzpg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.056683 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89c4d056-b780-4eb8-8860-44c16b3cb1ba-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dtzpg\" (UID: \"89c4d056-b780-4eb8-8860-44c16b3cb1ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dtzpg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.056746 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77380a57-af04-4e2e-8791-d00f466f31a9-serving-cert\") pod \"etcd-operator-b45778765-wkrfg\" (UID: \"77380a57-af04-4e2e-8791-d00f466f31a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.056771 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89c4d056-b780-4eb8-8860-44c16b3cb1ba-config\") pod \"kube-controller-manager-operator-78b949d7b-dtzpg\" (UID: \"89c4d056-b780-4eb8-8860-44c16b3cb1ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dtzpg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.056822 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.056853 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l8z7\" (UniqueName: \"kubernetes.io/projected/77380a57-af04-4e2e-8791-d00f466f31a9-kube-api-access-8l8z7\") pod \"etcd-operator-b45778765-wkrfg\" (UID: \"77380a57-af04-4e2e-8791-d00f466f31a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.056896 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/38d3cf53-6a1c-4009-9b0a-0638aae38656-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.056919 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/38d3cf53-6a1c-4009-9b0a-0638aae38656-registry-tls\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.056943 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/38d3cf53-6a1c-4009-9b0a-0638aae38656-bound-sa-token\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.056975 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/38d3cf53-6a1c-4009-9b0a-0638aae38656-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.057023 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdmth\" (UniqueName: \"kubernetes.io/projected/38d3cf53-6a1c-4009-9b0a-0638aae38656-kube-api-access-xdmth\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.057047 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77380a57-af04-4e2e-8791-d00f466f31a9-etcd-client\") pod \"etcd-operator-b45778765-wkrfg\" (UID: \"77380a57-af04-4e2e-8791-d00f466f31a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.057072 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/77380a57-af04-4e2e-8791-d00f466f31a9-config\") pod \"etcd-operator-b45778765-wkrfg\" (UID: \"77380a57-af04-4e2e-8791-d00f466f31a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.057091 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/77380a57-af04-4e2e-8791-d00f466f31a9-etcd-service-ca\") pod \"etcd-operator-b45778765-wkrfg\" (UID: \"77380a57-af04-4e2e-8791-d00f466f31a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.057140 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38d3cf53-6a1c-4009-9b0a-0638aae38656-trusted-ca\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.057169 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/38d3cf53-6a1c-4009-9b0a-0638aae38656-registry-certificates\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: E1124 16:54:22.058054 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:22.558033395 +0000 UTC m=+143.805002053 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.143095 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xxdrq" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.154871 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.159216 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.159505 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c893e46b-93d8-4545-a905-f2b0cf62a746-socket-dir\") pod \"csi-hostpathplugin-dfw5p\" (UID: \"c893e46b-93d8-4545-a905-f2b0cf62a746\") " pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.159540 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/666315a2-e8c4-42db-849b-d4c9e0d437c1-service-ca-bundle\") pod \"router-default-5444994796-lcnvd\" (UID: \"666315a2-e8c4-42db-849b-d4c9e0d437c1\") " pod="openshift-ingress/router-default-5444994796-lcnvd" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.159614 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77380a57-af04-4e2e-8791-d00f466f31a9-config\") pod \"etcd-operator-b45778765-wkrfg\" (UID: \"77380a57-af04-4e2e-8791-d00f466f31a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.159768 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/77380a57-af04-4e2e-8791-d00f466f31a9-etcd-service-ca\") pod \"etcd-operator-b45778765-wkrfg\" (UID: \"77380a57-af04-4e2e-8791-d00f466f31a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.159798 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjf6k\" (UniqueName: \"kubernetes.io/projected/d1fbefad-f380-42f2-a71c-6c3e42dce342-kube-api-access-fjf6k\") pod \"ingress-canary-cnfj2\" (UID: \"d1fbefad-f380-42f2-a71c-6c3e42dce342\") " pod="openshift-ingress-canary/ingress-canary-cnfj2" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.159905 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faed1e4b-beb0-4198-8557-5c72ac6d2566-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ljqpv\" (UID: \"faed1e4b-beb0-4198-8557-5c72ac6d2566\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ljqpv" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.159958 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38d3cf53-6a1c-4009-9b0a-0638aae38656-trusted-ca\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.159983 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a8574e3f-d6d0-4ab4-adb4-5d2eb82dc95a-node-bootstrap-token\") pod \"machine-config-server-shbcz\" (UID: \"a8574e3f-d6d0-4ab4-adb4-5d2eb82dc95a\") " pod="openshift-machine-config-operator/machine-config-server-shbcz" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.160006 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/165a085b-f1df-4875-ad2e-d9fb56db9f48-webhook-cert\") pod \"packageserver-d55dfcdfc-4w7fn\" (UID: \"165a085b-f1df-4875-ad2e-d9fb56db9f48\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.160041 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c893e46b-93d8-4545-a905-f2b0cf62a746-mountpoint-dir\") pod \"csi-hostpathplugin-dfw5p\" (UID: \"c893e46b-93d8-4545-a905-f2b0cf62a746\") " pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.160061 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22c436de-d338-440d-bbf3-35a09799cffd-metrics-tls\") pod \"dns-default-trtbq\" (UID: \"22c436de-d338-440d-bbf3-35a09799cffd\") " pod="openshift-dns/dns-default-trtbq" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.160109 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/38d3cf53-6a1c-4009-9b0a-0638aae38656-registry-certificates\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.160133 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b8cf1e2-836e-4240-93b8-1cb47a164953-images\") pod \"machine-config-operator-74547568cd-9pdk7\" (UID: \"8b8cf1e2-836e-4240-93b8-1cb47a164953\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9pdk7" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.160160 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/77380a57-af04-4e2e-8791-d00f466f31a9-etcd-ca\") pod \"etcd-operator-b45778765-wkrfg\" (UID: \"77380a57-af04-4e2e-8791-d00f466f31a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.160184 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22c436de-d338-440d-bbf3-35a09799cffd-config-volume\") pod \"dns-default-trtbq\" (UID: \"22c436de-d338-440d-bbf3-35a09799cffd\") " pod="openshift-dns/dns-default-trtbq" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.160217 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/666315a2-e8c4-42db-849b-d4c9e0d437c1-default-certificate\") pod \"router-default-5444994796-lcnvd\" (UID: \"666315a2-e8c4-42db-849b-d4c9e0d437c1\") " 
pod="openshift-ingress/router-default-5444994796-lcnvd" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.160250 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57e2fd0a-5292-4540-8e4a-8da54e5b541a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xdb6h\" (UID: \"57e2fd0a-5292-4540-8e4a-8da54e5b541a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdb6h" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.160272 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8b8cf1e2-836e-4240-93b8-1cb47a164953-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9pdk7\" (UID: \"8b8cf1e2-836e-4240-93b8-1cb47a164953\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9pdk7" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.160293 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/666315a2-e8c4-42db-849b-d4c9e0d437c1-stats-auth\") pod \"router-default-5444994796-lcnvd\" (UID: \"666315a2-e8c4-42db-849b-d4c9e0d437c1\") " pod="openshift-ingress/router-default-5444994796-lcnvd" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.160358 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89c4d056-b780-4eb8-8860-44c16b3cb1ba-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dtzpg\" (UID: \"89c4d056-b780-4eb8-8860-44c16b3cb1ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dtzpg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.160385 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6e0460e-6a3f-41d8-97f3-a2d1e1676d53-config\") pod \"service-ca-operator-777779d784-s4wnf\" (UID: \"f6e0460e-6a3f-41d8-97f3-a2d1e1676d53\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s4wnf" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.160405 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c893e46b-93d8-4545-a905-f2b0cf62a746-registration-dir\") pod \"csi-hostpathplugin-dfw5p\" (UID: \"c893e46b-93d8-4545-a905-f2b0cf62a746\") " pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.160467 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/666315a2-e8c4-42db-849b-d4c9e0d437c1-metrics-certs\") pod \"router-default-5444994796-lcnvd\" (UID: \"666315a2-e8c4-42db-849b-d4c9e0d437c1\") " pod="openshift-ingress/router-default-5444994796-lcnvd" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.160492 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrk2k\" (UniqueName: \"kubernetes.io/projected/20950f1f-be32-40c9-84e8-abb6c2650d69-kube-api-access-vrk2k\") pod \"package-server-manager-789f6589d5-2db9l\" (UID: \"20950f1f-be32-40c9-84e8-abb6c2650d69\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2db9l" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.160513 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt8t8\" (UniqueName: \"kubernetes.io/projected/8b8cf1e2-836e-4240-93b8-1cb47a164953-kube-api-access-rt8t8\") pod \"machine-config-operator-74547568cd-9pdk7\" (UID: \"8b8cf1e2-836e-4240-93b8-1cb47a164953\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9pdk7" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.160577 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l8z7\" (UniqueName: \"kubernetes.io/projected/77380a57-af04-4e2e-8791-d00f466f31a9-kube-api-access-8l8z7\") pod \"etcd-operator-b45778765-wkrfg\" (UID: \"77380a57-af04-4e2e-8791-d00f466f31a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" Nov 24 16:54:22 crc kubenswrapper[4768]: E1124 16:54:22.160675 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:22.660652938 +0000 UTC m=+143.907621606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.160710 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a8574e3f-d6d0-4ab4-adb4-5d2eb82dc95a-certs\") pod \"machine-config-server-shbcz\" (UID: \"a8574e3f-d6d0-4ab4-adb4-5d2eb82dc95a\") " pod="openshift-machine-config-operator/machine-config-server-shbcz" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.160741 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8b8cf1e2-836e-4240-93b8-1cb47a164953-proxy-tls\") pod \"machine-config-operator-74547568cd-9pdk7\" (UID: \"8b8cf1e2-836e-4240-93b8-1cb47a164953\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9pdk7" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.160801 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/38d3cf53-6a1c-4009-9b0a-0638aae38656-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.162455 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77380a57-af04-4e2e-8791-d00f466f31a9-config\") pod \"etcd-operator-b45778765-wkrfg\" (UID: \"77380a57-af04-4e2e-8791-d00f466f31a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.164458 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/77380a57-af04-4e2e-8791-d00f466f31a9-etcd-ca\") pod \"etcd-operator-b45778765-wkrfg\" (UID: \"77380a57-af04-4e2e-8791-d00f466f31a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.164980 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/38d3cf53-6a1c-4009-9b0a-0638aae38656-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.165105 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38d3cf53-6a1c-4009-9b0a-0638aae38656-trusted-ca\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.165880 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/38d3cf53-6a1c-4009-9b0a-0638aae38656-registry-certificates\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.167737 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tsz9j"] Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.167836 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faed1e4b-beb0-4198-8557-5c72ac6d2566-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ljqpv\" (UID: \"faed1e4b-beb0-4198-8557-5c72ac6d2566\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ljqpv" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.167874 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6393ad56-dadc-453f-b4f6-b7a6b52304e1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rhk4d\" (UID: \"6393ad56-dadc-453f-b4f6-b7a6b52304e1\") " pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.169813 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rnhf2" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.173225 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20950f1f-be32-40c9-84e8-abb6c2650d69-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2db9l\" (UID: \"20950f1f-be32-40c9-84e8-abb6c2650d69\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2db9l" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.173301 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/38d3cf53-6a1c-4009-9b0a-0638aae38656-bound-sa-token\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.173340 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wd5t\" (UniqueName: \"kubernetes.io/projected/3fe2ee62-cd6a-42be-b839-4c677251a006-kube-api-access-4wd5t\") pod \"olm-operator-6b444d44fb-sznn8\" (UID: \"3fe2ee62-cd6a-42be-b839-4c677251a006\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sznn8" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.174231 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6e0460e-6a3f-41d8-97f3-a2d1e1676d53-serving-cert\") pod \"service-ca-operator-777779d784-s4wnf\" (UID: \"f6e0460e-6a3f-41d8-97f3-a2d1e1676d53\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s4wnf" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.174515 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdmth\" (UniqueName: \"kubernetes.io/projected/38d3cf53-6a1c-4009-9b0a-0638aae38656-kube-api-access-xdmth\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.174839 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77380a57-af04-4e2e-8791-d00f466f31a9-etcd-client\") pod \"etcd-operator-b45778765-wkrfg\" (UID: \"77380a57-af04-4e2e-8791-d00f466f31a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.175001 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34d6a2c2-3620-4dd5-a7fd-a160030b3c7d-config-volume\") pod \"collect-profiles-29400045-ttsf8\" (UID: \"34d6a2c2-3620-4dd5-a7fd-a160030b3c7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.175022 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/165a085b-f1df-4875-ad2e-d9fb56db9f48-apiservice-cert\") pod \"packageserver-d55dfcdfc-4w7fn\" (UID: \"165a085b-f1df-4875-ad2e-d9fb56db9f48\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.175320 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smtvp\" (UniqueName: \"kubernetes.io/projected/0ffc195f-3e88-451d-8ade-f4413e41b076-kube-api-access-smtvp\") pod \"service-ca-9c57cc56f-26hqh\" (UID: \"0ffc195f-3e88-451d-8ade-f4413e41b076\") " pod="openshift-service-ca/service-ca-9c57cc56f-26hqh" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.176932 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6393ad56-dadc-453f-b4f6-b7a6b52304e1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rhk4d\" (UID: \"6393ad56-dadc-453f-b4f6-b7a6b52304e1\") " pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.176968 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn7j5\" (UniqueName: \"kubernetes.io/projected/165a085b-f1df-4875-ad2e-d9fb56db9f48-kube-api-access-nn7j5\") pod \"packageserver-d55dfcdfc-4w7fn\" (UID: \"165a085b-f1df-4875-ad2e-d9fb56db9f48\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.177143 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj2k4\" (UniqueName: \"kubernetes.io/projected/666315a2-e8c4-42db-849b-d4c9e0d437c1-kube-api-access-rj2k4\") pod \"router-default-5444994796-lcnvd\" (UID: \"666315a2-e8c4-42db-849b-d4c9e0d437c1\") " pod="openshift-ingress/router-default-5444994796-lcnvd" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.177410 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7544\" (UniqueName: \"kubernetes.io/projected/f6e0460e-6a3f-41d8-97f3-a2d1e1676d53-kube-api-access-x7544\") pod \"service-ca-operator-777779d784-s4wnf\" (UID: \"f6e0460e-6a3f-41d8-97f3-a2d1e1676d53\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s4wnf" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.177432 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0ffc195f-3e88-451d-8ade-f4413e41b076-signing-key\") pod \"service-ca-9c57cc56f-26hqh\" (UID: \"0ffc195f-3e88-451d-8ade-f4413e41b076\") " pod="openshift-service-ca/service-ca-9c57cc56f-26hqh" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.177941 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vm425" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.178169 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/165a085b-f1df-4875-ad2e-d9fb56db9f48-tmpfs\") pod \"packageserver-d55dfcdfc-4w7fn\" (UID: \"165a085b-f1df-4875-ad2e-d9fb56db9f48\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.179671 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/faed1e4b-beb0-4198-8557-5c72ac6d2566-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ljqpv\" (UID: \"faed1e4b-beb0-4198-8557-5c72ac6d2566\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ljqpv" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.180495 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdk4h\" (UniqueName: \"kubernetes.io/projected/43f897f2-d364-4b38-9345-5660dcf6e704-kube-api-access-mdk4h\") pod \"multus-admission-controller-857f4d67dd-vdhkx\" (UID: \"43f897f2-d364-4b38-9345-5660dcf6e704\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vdhkx" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.180885 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/43f897f2-d364-4b38-9345-5660dcf6e704-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vdhkx\" (UID: \"43f897f2-d364-4b38-9345-5660dcf6e704\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vdhkx" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.181646 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89c4d056-b780-4eb8-8860-44c16b3cb1ba-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dtzpg\" (UID: \"89c4d056-b780-4eb8-8860-44c16b3cb1ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dtzpg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.181727 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3fe2ee62-cd6a-42be-b839-4c677251a006-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sznn8\" (UID: \"3fe2ee62-cd6a-42be-b839-4c677251a006\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sznn8" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.181792 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77380a57-af04-4e2e-8791-d00f466f31a9-serving-cert\") pod \"etcd-operator-b45778765-wkrfg\" (UID: \"77380a57-af04-4e2e-8791-d00f466f31a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.181796 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77380a57-af04-4e2e-8791-d00f466f31a9-etcd-client\") pod \"etcd-operator-b45778765-wkrfg\" (UID: \"77380a57-af04-4e2e-8791-d00f466f31a9\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.181918 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89c4d056-b780-4eb8-8860-44c16b3cb1ba-config\") pod \"kube-controller-manager-operator-78b949d7b-dtzpg\" (UID: \"89c4d056-b780-4eb8-8860-44c16b3cb1ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dtzpg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.181949 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jwsj\" (UniqueName: \"kubernetes.io/projected/34d6a2c2-3620-4dd5-a7fd-a160030b3c7d-kube-api-access-9jwsj\") pod \"collect-profiles-29400045-ttsf8\" (UID: \"34d6a2c2-3620-4dd5-a7fd-a160030b3c7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.182011 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.182019 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/77380a57-af04-4e2e-8791-d00f466f31a9-etcd-service-ca\") pod \"etcd-operator-b45778765-wkrfg\" (UID: \"77380a57-af04-4e2e-8791-d00f466f31a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.182043 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34d6a2c2-3620-4dd5-a7fd-a160030b3c7d-secret-volume\") pod \"collect-profiles-29400045-ttsf8\" (UID: \"34d6a2c2-3620-4dd5-a7fd-a160030b3c7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.182089 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4fl5\" (UniqueName: \"kubernetes.io/projected/a8574e3f-d6d0-4ab4-adb4-5d2eb82dc95a-kube-api-access-q4fl5\") pod \"machine-config-server-shbcz\" (UID: \"a8574e3f-d6d0-4ab4-adb4-5d2eb82dc95a\") " pod="openshift-machine-config-operator/machine-config-server-shbcz" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.182172 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1fbefad-f380-42f2-a71c-6c3e42dce342-cert\") pod \"ingress-canary-cnfj2\" (UID: \"d1fbefad-f380-42f2-a71c-6c3e42dce342\") " pod="openshift-ingress-canary/ingress-canary-cnfj2" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.182201 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t74cw\" (UniqueName: \"kubernetes.io/projected/c893e46b-93d8-4545-a905-f2b0cf62a746-kube-api-access-t74cw\") pod \"csi-hostpathplugin-dfw5p\" (UID: \"c893e46b-93d8-4545-a905-f2b0cf62a746\") " pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 
16:54:22.182249 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcf5d\" (UniqueName: \"kubernetes.io/projected/57e2fd0a-5292-4540-8e4a-8da54e5b541a-kube-api-access-lcf5d\") pod \"kube-storage-version-migrator-operator-b67b599dd-xdb6h\" (UID: \"57e2fd0a-5292-4540-8e4a-8da54e5b541a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdb6h" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.182328 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c893e46b-93d8-4545-a905-f2b0cf62a746-csi-data-dir\") pod \"csi-hostpathplugin-dfw5p\" (UID: \"c893e46b-93d8-4545-a905-f2b0cf62a746\") " pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.182393 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/27e1bc8e-3020-4916-ae7e-6d07fe111973-profile-collector-cert\") pod \"catalog-operator-68c6474976-grlvv\" (UID: \"27e1bc8e-3020-4916-ae7e-6d07fe111973\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-grlvv" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.182418 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j64x7\" (UniqueName: \"kubernetes.io/projected/6393ad56-dadc-453f-b4f6-b7a6b52304e1-kube-api-access-j64x7\") pod \"marketplace-operator-79b997595-rhk4d\" (UID: \"6393ad56-dadc-453f-b4f6-b7a6b52304e1\") " pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.182729 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/38d3cf53-6a1c-4009-9b0a-0638aae38656-registry-tls\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: E1124 16:54:22.182985 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:22.682968873 +0000 UTC m=+143.929937531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.183284 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89c4d056-b780-4eb8-8860-44c16b3cb1ba-config\") pod \"kube-controller-manager-operator-78b949d7b-dtzpg\" (UID: \"89c4d056-b780-4eb8-8860-44c16b3cb1ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dtzpg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.183976 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/27e1bc8e-3020-4916-ae7e-6d07fe111973-srv-cert\") pod \"catalog-operator-68c6474976-grlvv\" (UID: \"27e1bc8e-3020-4916-ae7e-6d07fe111973\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-grlvv" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.184011 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57e2fd0a-5292-4540-8e4a-8da54e5b541a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xdb6h\" (UID: \"57e2fd0a-5292-4540-8e4a-8da54e5b541a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdb6h" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.184061 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0ffc195f-3e88-451d-8ade-f4413e41b076-signing-cabundle\") pod \"service-ca-9c57cc56f-26hqh\" (UID: \"0ffc195f-3e88-451d-8ade-f4413e41b076\") " pod="openshift-service-ca/service-ca-9c57cc56f-26hqh" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.184134 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3fe2ee62-cd6a-42be-b839-4c677251a006-srv-cert\") pod \"olm-operator-6b444d44fb-sznn8\" (UID: \"3fe2ee62-cd6a-42be-b839-4c677251a006\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sznn8" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.184153 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvcqn\" (UniqueName: \"kubernetes.io/projected/22c436de-d338-440d-bbf3-35a09799cffd-kube-api-access-gvcqn\") pod \"dns-default-trtbq\" (UID: \"22c436de-d338-440d-bbf3-35a09799cffd\") " pod="openshift-dns/dns-default-trtbq" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.184215 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/38d3cf53-6a1c-4009-9b0a-0638aae38656-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.184232 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c893e46b-93d8-4545-a905-f2b0cf62a746-plugins-dir\") pod \"csi-hostpathplugin-dfw5p\" (UID: \"c893e46b-93d8-4545-a905-f2b0cf62a746\") " pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.184247 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqq55\" (UniqueName: \"kubernetes.io/projected/27e1bc8e-3020-4916-ae7e-6d07fe111973-kube-api-access-gqq55\") pod \"catalog-operator-68c6474976-grlvv\" (UID: \"27e1bc8e-3020-4916-ae7e-6d07fe111973\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-grlvv" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.184823 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89c4d056-b780-4eb8-8860-44c16b3cb1ba-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dtzpg\" (UID: \"89c4d056-b780-4eb8-8860-44c16b3cb1ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dtzpg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.189397 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/38d3cf53-6a1c-4009-9b0a-0638aae38656-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.189665 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kzr98" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.190323 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/38d3cf53-6a1c-4009-9b0a-0638aae38656-registry-tls\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.192036 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77380a57-af04-4e2e-8791-d00f466f31a9-serving-cert\") pod \"etcd-operator-b45778765-wkrfg\" (UID: \"77380a57-af04-4e2e-8791-d00f466f31a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.192480 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-88z72"] Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.202840 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dn5t9" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.203912 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l8z7\" (UniqueName: \"kubernetes.io/projected/77380a57-af04-4e2e-8791-d00f466f31a9-kube-api-access-8l8z7\") pod \"etcd-operator-b45778765-wkrfg\" (UID: \"77380a57-af04-4e2e-8791-d00f466f31a9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.224920 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89c4d056-b780-4eb8-8860-44c16b3cb1ba-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dtzpg\" (UID: \"89c4d056-b780-4eb8-8860-44c16b3cb1ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dtzpg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.243102 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/38d3cf53-6a1c-4009-9b0a-0638aae38656-bound-sa-token\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.249869 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tr27b"] Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.266205 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdmth\" (UniqueName: \"kubernetes.io/projected/38d3cf53-6a1c-4009-9b0a-0638aae38656-kube-api-access-xdmth\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.288943 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:22 crc kubenswrapper[4768]: E1124 16:54:22.289452 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:22.789427154 +0000 UTC m=+144.036395812 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.289529 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faed1e4b-beb0-4198-8557-5c72ac6d2566-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ljqpv\" (UID: \"faed1e4b-beb0-4198-8557-5c72ac6d2566\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ljqpv" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.289566 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6393ad56-dadc-453f-b4f6-b7a6b52304e1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rhk4d\" (UID: \"6393ad56-dadc-453f-b4f6-b7a6b52304e1\") " pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.289592 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20950f1f-be32-40c9-84e8-abb6c2650d69-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2db9l\" (UID: \"20950f1f-be32-40c9-84e8-abb6c2650d69\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2db9l" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.289625 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wd5t\" (UniqueName: \"kubernetes.io/projected/3fe2ee62-cd6a-42be-b839-4c677251a006-kube-api-access-4wd5t\") pod \"olm-operator-6b444d44fb-sznn8\" (UID: \"3fe2ee62-cd6a-42be-b839-4c677251a006\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sznn8" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.289649 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6e0460e-6a3f-41d8-97f3-a2d1e1676d53-serving-cert\") pod \"service-ca-operator-777779d784-s4wnf\" (UID: \"f6e0460e-6a3f-41d8-97f3-a2d1e1676d53\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s4wnf" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.289675 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34d6a2c2-3620-4dd5-a7fd-a160030b3c7d-config-volume\") pod \"collect-profiles-29400045-ttsf8\" (UID: \"34d6a2c2-3620-4dd5-a7fd-a160030b3c7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.289699 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/165a085b-f1df-4875-ad2e-d9fb56db9f48-apiservice-cert\") pod \"packageserver-d55dfcdfc-4w7fn\" (UID: \"165a085b-f1df-4875-ad2e-d9fb56db9f48\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 
16:54:22.289726 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smtvp\" (UniqueName: \"kubernetes.io/projected/0ffc195f-3e88-451d-8ade-f4413e41b076-kube-api-access-smtvp\") pod \"service-ca-9c57cc56f-26hqh\" (UID: \"0ffc195f-3e88-451d-8ade-f4413e41b076\") " pod="openshift-service-ca/service-ca-9c57cc56f-26hqh" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.289798 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj2k4\" (UniqueName: \"kubernetes.io/projected/666315a2-e8c4-42db-849b-d4c9e0d437c1-kube-api-access-rj2k4\") pod \"router-default-5444994796-lcnvd\" (UID: \"666315a2-e8c4-42db-849b-d4c9e0d437c1\") " pod="openshift-ingress/router-default-5444994796-lcnvd" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.289823 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6393ad56-dadc-453f-b4f6-b7a6b52304e1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rhk4d\" (UID: \"6393ad56-dadc-453f-b4f6-b7a6b52304e1\") " pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.289846 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn7j5\" (UniqueName: \"kubernetes.io/projected/165a085b-f1df-4875-ad2e-d9fb56db9f48-kube-api-access-nn7j5\") pod \"packageserver-d55dfcdfc-4w7fn\" (UID: \"165a085b-f1df-4875-ad2e-d9fb56db9f48\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.289869 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7544\" (UniqueName: \"kubernetes.io/projected/f6e0460e-6a3f-41d8-97f3-a2d1e1676d53-kube-api-access-x7544\") pod \"service-ca-operator-777779d784-s4wnf\" (UID: \"f6e0460e-6a3f-41d8-97f3-a2d1e1676d53\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s4wnf" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.289890 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0ffc195f-3e88-451d-8ade-f4413e41b076-signing-key\") pod \"service-ca-9c57cc56f-26hqh\" (UID: \"0ffc195f-3e88-451d-8ade-f4413e41b076\") " pod="openshift-service-ca/service-ca-9c57cc56f-26hqh" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.289927 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/165a085b-f1df-4875-ad2e-d9fb56db9f48-tmpfs\") pod \"packageserver-d55dfcdfc-4w7fn\" (UID: \"165a085b-f1df-4875-ad2e-d9fb56db9f48\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.289952 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/faed1e4b-beb0-4198-8557-5c72ac6d2566-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ljqpv\" (UID: \"faed1e4b-beb0-4198-8557-5c72ac6d2566\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ljqpv" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.289979 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdk4h\" (UniqueName: 
\"kubernetes.io/projected/43f897f2-d364-4b38-9345-5660dcf6e704-kube-api-access-mdk4h\") pod \"multus-admission-controller-857f4d67dd-vdhkx\" (UID: \"43f897f2-d364-4b38-9345-5660dcf6e704\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vdhkx" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290006 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/43f897f2-d364-4b38-9345-5660dcf6e704-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vdhkx\" (UID: \"43f897f2-d364-4b38-9345-5660dcf6e704\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vdhkx" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290046 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3fe2ee62-cd6a-42be-b839-4c677251a006-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sznn8\" (UID: \"3fe2ee62-cd6a-42be-b839-4c677251a006\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sznn8" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290077 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jwsj\" (UniqueName: \"kubernetes.io/projected/34d6a2c2-3620-4dd5-a7fd-a160030b3c7d-kube-api-access-9jwsj\") pod \"collect-profiles-29400045-ttsf8\" (UID: \"34d6a2c2-3620-4dd5-a7fd-a160030b3c7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290111 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290135 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34d6a2c2-3620-4dd5-a7fd-a160030b3c7d-secret-volume\") pod \"collect-profiles-29400045-ttsf8\" (UID: \"34d6a2c2-3620-4dd5-a7fd-a160030b3c7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290160 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4fl5\" (UniqueName: \"kubernetes.io/projected/a8574e3f-d6d0-4ab4-adb4-5d2eb82dc95a-kube-api-access-q4fl5\") pod \"machine-config-server-shbcz\" (UID: \"a8574e3f-d6d0-4ab4-adb4-5d2eb82dc95a\") " pod="openshift-machine-config-operator/machine-config-server-shbcz" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290188 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1fbefad-f380-42f2-a71c-6c3e42dce342-cert\") pod \"ingress-canary-cnfj2\" (UID: \"d1fbefad-f380-42f2-a71c-6c3e42dce342\") " pod="openshift-ingress-canary/ingress-canary-cnfj2" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290217 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t74cw\" (UniqueName: \"kubernetes.io/projected/c893e46b-93d8-4545-a905-f2b0cf62a746-kube-api-access-t74cw\") pod \"csi-hostpathplugin-dfw5p\" (UID: 
\"c893e46b-93d8-4545-a905-f2b0cf62a746\") " pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290239 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcf5d\" (UniqueName: \"kubernetes.io/projected/57e2fd0a-5292-4540-8e4a-8da54e5b541a-kube-api-access-lcf5d\") pod \"kube-storage-version-migrator-operator-b67b599dd-xdb6h\" (UID: \"57e2fd0a-5292-4540-8e4a-8da54e5b541a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdb6h" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290261 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c893e46b-93d8-4545-a905-f2b0cf62a746-csi-data-dir\") pod \"csi-hostpathplugin-dfw5p\" (UID: \"c893e46b-93d8-4545-a905-f2b0cf62a746\") " pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290285 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/27e1bc8e-3020-4916-ae7e-6d07fe111973-profile-collector-cert\") pod \"catalog-operator-68c6474976-grlvv\" (UID: \"27e1bc8e-3020-4916-ae7e-6d07fe111973\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-grlvv" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290306 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j64x7\" (UniqueName: \"kubernetes.io/projected/6393ad56-dadc-453f-b4f6-b7a6b52304e1-kube-api-access-j64x7\") pod \"marketplace-operator-79b997595-rhk4d\" (UID: \"6393ad56-dadc-453f-b4f6-b7a6b52304e1\") " pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290331 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/27e1bc8e-3020-4916-ae7e-6d07fe111973-srv-cert\") pod \"catalog-operator-68c6474976-grlvv\" (UID: \"27e1bc8e-3020-4916-ae7e-6d07fe111973\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-grlvv" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290387 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57e2fd0a-5292-4540-8e4a-8da54e5b541a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xdb6h\" (UID: \"57e2fd0a-5292-4540-8e4a-8da54e5b541a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdb6h" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290394 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-vh9gq"] Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290410 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0ffc195f-3e88-451d-8ade-f4413e41b076-signing-cabundle\") pod \"service-ca-9c57cc56f-26hqh\" (UID: \"0ffc195f-3e88-451d-8ade-f4413e41b076\") " pod="openshift-service-ca/service-ca-9c57cc56f-26hqh" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290447 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/3fe2ee62-cd6a-42be-b839-4c677251a006-srv-cert\") pod \"olm-operator-6b444d44fb-sznn8\" (UID: \"3fe2ee62-cd6a-42be-b839-4c677251a006\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sznn8" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290469 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvcqn\" (UniqueName: \"kubernetes.io/projected/22c436de-d338-440d-bbf3-35a09799cffd-kube-api-access-gvcqn\") pod \"dns-default-trtbq\" (UID: \"22c436de-d338-440d-bbf3-35a09799cffd\") " pod="openshift-dns/dns-default-trtbq" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290493 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c893e46b-93d8-4545-a905-f2b0cf62a746-plugins-dir\") pod \"csi-hostpathplugin-dfw5p\" (UID: \"c893e46b-93d8-4545-a905-f2b0cf62a746\") " pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290515 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqq55\" (UniqueName: \"kubernetes.io/projected/27e1bc8e-3020-4916-ae7e-6d07fe111973-kube-api-access-gqq55\") pod \"catalog-operator-68c6474976-grlvv\" (UID: \"27e1bc8e-3020-4916-ae7e-6d07fe111973\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-grlvv" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290540 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c893e46b-93d8-4545-a905-f2b0cf62a746-socket-dir\") pod \"csi-hostpathplugin-dfw5p\" (UID: \"c893e46b-93d8-4545-a905-f2b0cf62a746\") " pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290564 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/666315a2-e8c4-42db-849b-d4c9e0d437c1-service-ca-bundle\") pod \"router-default-5444994796-lcnvd\" (UID: \"666315a2-e8c4-42db-849b-d4c9e0d437c1\") " pod="openshift-ingress/router-default-5444994796-lcnvd" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290594 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjf6k\" (UniqueName: \"kubernetes.io/projected/d1fbefad-f380-42f2-a71c-6c3e42dce342-kube-api-access-fjf6k\") pod \"ingress-canary-cnfj2\" (UID: \"d1fbefad-f380-42f2-a71c-6c3e42dce342\") " pod="openshift-ingress-canary/ingress-canary-cnfj2" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290616 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faed1e4b-beb0-4198-8557-5c72ac6d2566-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ljqpv\" (UID: \"faed1e4b-beb0-4198-8557-5c72ac6d2566\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ljqpv" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290639 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a8574e3f-d6d0-4ab4-adb4-5d2eb82dc95a-node-bootstrap-token\") pod \"machine-config-server-shbcz\" (UID: \"a8574e3f-d6d0-4ab4-adb4-5d2eb82dc95a\") " pod="openshift-machine-config-operator/machine-config-server-shbcz" Nov 24 16:54:22 crc kubenswrapper[4768]: E1124 
16:54:22.290668 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:22.790654061 +0000 UTC m=+144.037622709 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290693 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/165a085b-f1df-4875-ad2e-d9fb56db9f48-webhook-cert\") pod \"packageserver-d55dfcdfc-4w7fn\" (UID: \"165a085b-f1df-4875-ad2e-d9fb56db9f48\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290725 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c893e46b-93d8-4545-a905-f2b0cf62a746-mountpoint-dir\") pod \"csi-hostpathplugin-dfw5p\" (UID: \"c893e46b-93d8-4545-a905-f2b0cf62a746\") " pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290741 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22c436de-d338-440d-bbf3-35a09799cffd-metrics-tls\") pod \"dns-default-trtbq\" (UID: \"22c436de-d338-440d-bbf3-35a09799cffd\") " pod="openshift-dns/dns-default-trtbq" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290766 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b8cf1e2-836e-4240-93b8-1cb47a164953-images\") pod \"machine-config-operator-74547568cd-9pdk7\" (UID: \"8b8cf1e2-836e-4240-93b8-1cb47a164953\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9pdk7" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290786 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22c436de-d338-440d-bbf3-35a09799cffd-config-volume\") pod \"dns-default-trtbq\" (UID: \"22c436de-d338-440d-bbf3-35a09799cffd\") " pod="openshift-dns/dns-default-trtbq" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290803 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/666315a2-e8c4-42db-849b-d4c9e0d437c1-default-certificate\") pod \"router-default-5444994796-lcnvd\" (UID: \"666315a2-e8c4-42db-849b-d4c9e0d437c1\") " pod="openshift-ingress/router-default-5444994796-lcnvd" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290825 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57e2fd0a-5292-4540-8e4a-8da54e5b541a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xdb6h\" (UID: \"57e2fd0a-5292-4540-8e4a-8da54e5b541a\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdb6h" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290842 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8b8cf1e2-836e-4240-93b8-1cb47a164953-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9pdk7\" (UID: \"8b8cf1e2-836e-4240-93b8-1cb47a164953\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9pdk7" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290860 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/666315a2-e8c4-42db-849b-d4c9e0d437c1-stats-auth\") pod \"router-default-5444994796-lcnvd\" (UID: \"666315a2-e8c4-42db-849b-d4c9e0d437c1\") " pod="openshift-ingress/router-default-5444994796-lcnvd" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290892 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6e0460e-6a3f-41d8-97f3-a2d1e1676d53-config\") pod \"service-ca-operator-777779d784-s4wnf\" (UID: \"f6e0460e-6a3f-41d8-97f3-a2d1e1676d53\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s4wnf" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290909 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c893e46b-93d8-4545-a905-f2b0cf62a746-registration-dir\") pod \"csi-hostpathplugin-dfw5p\" (UID: \"c893e46b-93d8-4545-a905-f2b0cf62a746\") " pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290929 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/666315a2-e8c4-42db-849b-d4c9e0d437c1-metrics-certs\") pod \"router-default-5444994796-lcnvd\" (UID: \"666315a2-e8c4-42db-849b-d4c9e0d437c1\") " pod="openshift-ingress/router-default-5444994796-lcnvd" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290949 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrk2k\" (UniqueName: \"kubernetes.io/projected/20950f1f-be32-40c9-84e8-abb6c2650d69-kube-api-access-vrk2k\") pod \"package-server-manager-789f6589d5-2db9l\" (UID: \"20950f1f-be32-40c9-84e8-abb6c2650d69\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2db9l" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290968 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt8t8\" (UniqueName: \"kubernetes.io/projected/8b8cf1e2-836e-4240-93b8-1cb47a164953-kube-api-access-rt8t8\") pod \"machine-config-operator-74547568cd-9pdk7\" (UID: \"8b8cf1e2-836e-4240-93b8-1cb47a164953\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9pdk7" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.290994 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a8574e3f-d6d0-4ab4-adb4-5d2eb82dc95a-certs\") pod \"machine-config-server-shbcz\" (UID: \"a8574e3f-d6d0-4ab4-adb4-5d2eb82dc95a\") " pod="openshift-machine-config-operator/machine-config-server-shbcz" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.291008 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8b8cf1e2-836e-4240-93b8-1cb47a164953-proxy-tls\") pod \"machine-config-operator-74547568cd-9pdk7\" (UID: \"8b8cf1e2-836e-4240-93b8-1cb47a164953\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9pdk7" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.292287 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c893e46b-93d8-4545-a905-f2b0cf62a746-mountpoint-dir\") pod \"csi-hostpathplugin-dfw5p\" (UID: \"c893e46b-93d8-4545-a905-f2b0cf62a746\") " pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.295149 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6393ad56-dadc-453f-b4f6-b7a6b52304e1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rhk4d\" (UID: \"6393ad56-dadc-453f-b4f6-b7a6b52304e1\") " pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.295555 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0ffc195f-3e88-451d-8ade-f4413e41b076-signing-cabundle\") pod \"service-ca-9c57cc56f-26hqh\" (UID: \"0ffc195f-3e88-451d-8ade-f4413e41b076\") " pod="openshift-service-ca/service-ca-9c57cc56f-26hqh" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.295616 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8b8cf1e2-836e-4240-93b8-1cb47a164953-proxy-tls\") pod \"machine-config-operator-74547568cd-9pdk7\" (UID: \"8b8cf1e2-836e-4240-93b8-1cb47a164953\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9pdk7" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.295723 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c893e46b-93d8-4545-a905-f2b0cf62a746-plugins-dir\") pod \"csi-hostpathplugin-dfw5p\" (UID: \"c893e46b-93d8-4545-a905-f2b0cf62a746\") " pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.296735 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8b8cf1e2-836e-4240-93b8-1cb47a164953-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9pdk7\" (UID: \"8b8cf1e2-836e-4240-93b8-1cb47a164953\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9pdk7" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.297873 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34d6a2c2-3620-4dd5-a7fd-a160030b3c7d-config-volume\") pod \"collect-profiles-29400045-ttsf8\" (UID: \"34d6a2c2-3620-4dd5-a7fd-a160030b3c7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.298059 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/165a085b-f1df-4875-ad2e-d9fb56db9f48-tmpfs\") pod \"packageserver-d55dfcdfc-4w7fn\" (UID: \"165a085b-f1df-4875-ad2e-d9fb56db9f48\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.297315 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a8574e3f-d6d0-4ab4-adb4-5d2eb82dc95a-node-bootstrap-token\") pod \"machine-config-server-shbcz\" (UID: \"a8574e3f-d6d0-4ab4-adb4-5d2eb82dc95a\") " pod="openshift-machine-config-operator/machine-config-server-shbcz" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.299335 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c893e46b-93d8-4545-a905-f2b0cf62a746-socket-dir\") pod \"csi-hostpathplugin-dfw5p\" (UID: \"c893e46b-93d8-4545-a905-f2b0cf62a746\") " pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.299393 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6393ad56-dadc-453f-b4f6-b7a6b52304e1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rhk4d\" (UID: \"6393ad56-dadc-453f-b4f6-b7a6b52304e1\") " pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.299519 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c893e46b-93d8-4545-a905-f2b0cf62a746-csi-data-dir\") pod \"csi-hostpathplugin-dfw5p\" (UID: \"c893e46b-93d8-4545-a905-f2b0cf62a746\") " pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.300179 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/666315a2-e8c4-42db-849b-d4c9e0d437c1-service-ca-bundle\") pod \"router-default-5444994796-lcnvd\" (UID: \"666315a2-e8c4-42db-849b-d4c9e0d437c1\") " pod="openshift-ingress/router-default-5444994796-lcnvd" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.300390 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faed1e4b-beb0-4198-8557-5c72ac6d2566-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ljqpv\" (UID: \"faed1e4b-beb0-4198-8557-5c72ac6d2566\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ljqpv" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.300518 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22c436de-d338-440d-bbf3-35a09799cffd-config-volume\") pod \"dns-default-trtbq\" (UID: \"22c436de-d338-440d-bbf3-35a09799cffd\") " pod="openshift-dns/dns-default-trtbq" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.300929 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20950f1f-be32-40c9-84e8-abb6c2650d69-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2db9l\" (UID: \"20950f1f-be32-40c9-84e8-abb6c2650d69\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2db9l" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.301483 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/3fe2ee62-cd6a-42be-b839-4c677251a006-srv-cert\") pod \"olm-operator-6b444d44fb-sznn8\" (UID: \"3fe2ee62-cd6a-42be-b839-4c677251a006\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sznn8" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.302025 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c893e46b-93d8-4545-a905-f2b0cf62a746-registration-dir\") pod \"csi-hostpathplugin-dfw5p\" (UID: \"c893e46b-93d8-4545-a905-f2b0cf62a746\") " pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.303058 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b8cf1e2-836e-4240-93b8-1cb47a164953-images\") pod \"machine-config-operator-74547568cd-9pdk7\" (UID: \"8b8cf1e2-836e-4240-93b8-1cb47a164953\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9pdk7" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.303814 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6e0460e-6a3f-41d8-97f3-a2d1e1676d53-config\") pod \"service-ca-operator-777779d784-s4wnf\" (UID: \"f6e0460e-6a3f-41d8-97f3-a2d1e1676d53\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s4wnf" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.304684 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6e0460e-6a3f-41d8-97f3-a2d1e1676d53-serving-cert\") pod \"service-ca-operator-777779d784-s4wnf\" (UID: \"f6e0460e-6a3f-41d8-97f3-a2d1e1676d53\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s4wnf" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.305001 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faed1e4b-beb0-4198-8557-5c72ac6d2566-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ljqpv\" (UID: \"faed1e4b-beb0-4198-8557-5c72ac6d2566\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ljqpv" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.306015 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22c436de-d338-440d-bbf3-35a09799cffd-metrics-tls\") pod \"dns-default-trtbq\" (UID: \"22c436de-d338-440d-bbf3-35a09799cffd\") " pod="openshift-dns/dns-default-trtbq" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.306421 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/165a085b-f1df-4875-ad2e-d9fb56db9f48-webhook-cert\") pod \"packageserver-d55dfcdfc-4w7fn\" (UID: \"165a085b-f1df-4875-ad2e-d9fb56db9f48\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.306598 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57e2fd0a-5292-4540-8e4a-8da54e5b541a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xdb6h\" (UID: \"57e2fd0a-5292-4540-8e4a-8da54e5b541a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdb6h" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.306986 
4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1fbefad-f380-42f2-a71c-6c3e42dce342-cert\") pod \"ingress-canary-cnfj2\" (UID: \"d1fbefad-f380-42f2-a71c-6c3e42dce342\") " pod="openshift-ingress-canary/ingress-canary-cnfj2" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.307668 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/666315a2-e8c4-42db-849b-d4c9e0d437c1-default-certificate\") pod \"router-default-5444994796-lcnvd\" (UID: \"666315a2-e8c4-42db-849b-d4c9e0d437c1\") " pod="openshift-ingress/router-default-5444994796-lcnvd" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.308407 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/165a085b-f1df-4875-ad2e-d9fb56db9f48-apiservice-cert\") pod \"packageserver-d55dfcdfc-4w7fn\" (UID: \"165a085b-f1df-4875-ad2e-d9fb56db9f48\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.308879 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/666315a2-e8c4-42db-849b-d4c9e0d437c1-stats-auth\") pod \"router-default-5444994796-lcnvd\" (UID: \"666315a2-e8c4-42db-849b-d4c9e0d437c1\") " pod="openshift-ingress/router-default-5444994796-lcnvd" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.309021 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34d6a2c2-3620-4dd5-a7fd-a160030b3c7d-secret-volume\") pod \"collect-profiles-29400045-ttsf8\" (UID: \"34d6a2c2-3620-4dd5-a7fd-a160030b3c7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.309428 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/27e1bc8e-3020-4916-ae7e-6d07fe111973-profile-collector-cert\") pod \"catalog-operator-68c6474976-grlvv\" (UID: \"27e1bc8e-3020-4916-ae7e-6d07fe111973\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-grlvv" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.310227 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57e2fd0a-5292-4540-8e4a-8da54e5b541a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xdb6h\" (UID: \"57e2fd0a-5292-4540-8e4a-8da54e5b541a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdb6h" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.311332 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/27e1bc8e-3020-4916-ae7e-6d07fe111973-srv-cert\") pod \"catalog-operator-68c6474976-grlvv\" (UID: \"27e1bc8e-3020-4916-ae7e-6d07fe111973\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-grlvv" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.312304 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/43f897f2-d364-4b38-9345-5660dcf6e704-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vdhkx\" (UID: \"43f897f2-d364-4b38-9345-5660dcf6e704\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-vdhkx" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.312388 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3fe2ee62-cd6a-42be-b839-4c677251a006-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sznn8\" (UID: \"3fe2ee62-cd6a-42be-b839-4c677251a006\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sznn8" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.312864 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/666315a2-e8c4-42db-849b-d4c9e0d437c1-metrics-certs\") pod \"router-default-5444994796-lcnvd\" (UID: \"666315a2-e8c4-42db-849b-d4c9e0d437c1\") " pod="openshift-ingress/router-default-5444994796-lcnvd" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.317218 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a8574e3f-d6d0-4ab4-adb4-5d2eb82dc95a-certs\") pod \"machine-config-server-shbcz\" (UID: \"a8574e3f-d6d0-4ab4-adb4-5d2eb82dc95a\") " pod="openshift-machine-config-operator/machine-config-server-shbcz" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.317421 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0ffc195f-3e88-451d-8ade-f4413e41b076-signing-key\") pod \"service-ca-9c57cc56f-26hqh\" (UID: \"0ffc195f-3e88-451d-8ade-f4413e41b076\") " pod="openshift-service-ca/service-ca-9c57cc56f-26hqh" Nov 24 16:54:22 crc kubenswrapper[4768]: W1124 16:54:22.323424 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ff067e9_1045_4aed_a5a3_1685140287c5.slice/crio-1bb2f42e1e5c147a308e71bb40de37830bf8560f2a4a84182bc941c0488f7dad WatchSource:0}: Error finding container 1bb2f42e1e5c147a308e71bb40de37830bf8560f2a4a84182bc941c0488f7dad: Status 404 returned error can't find the container with id 1bb2f42e1e5c147a308e71bb40de37830bf8560f2a4a84182bc941c0488f7dad Nov 24 16:54:22 crc kubenswrapper[4768]: W1124 16:54:22.326603 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode21a9e0e_6de3_467a_b719_761919fd008c.slice/crio-96a66e64029e4bf399e802a79116da50f21e0f27c3108c9d59b8bc745f708ddb WatchSource:0}: Error finding container 96a66e64029e4bf399e802a79116da50f21e0f27c3108c9d59b8bc745f708ddb: Status 404 returned error can't find the container with id 96a66e64029e4bf399e802a79116da50f21e0f27c3108c9d59b8bc745f708ddb Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.331255 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj2k4\" (UniqueName: \"kubernetes.io/projected/666315a2-e8c4-42db-849b-d4c9e0d437c1-kube-api-access-rj2k4\") pod \"router-default-5444994796-lcnvd\" (UID: \"666315a2-e8c4-42db-849b-d4c9e0d437c1\") " pod="openshift-ingress/router-default-5444994796-lcnvd" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.345400 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wd5t\" (UniqueName: \"kubernetes.io/projected/3fe2ee62-cd6a-42be-b839-4c677251a006-kube-api-access-4wd5t\") pod \"olm-operator-6b444d44fb-sznn8\" (UID: \"3fe2ee62-cd6a-42be-b839-4c677251a006\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sznn8" 
Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.363312 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqq55\" (UniqueName: \"kubernetes.io/projected/27e1bc8e-3020-4916-ae7e-6d07fe111973-kube-api-access-gqq55\") pod \"catalog-operator-68c6474976-grlvv\" (UID: \"27e1bc8e-3020-4916-ae7e-6d07fe111973\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-grlvv" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.369064 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64" event={"ID":"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d","Type":"ContainerStarted","Data":"5f75c2c8071c91c372017515de31aaca7b416e863a77ecde4e03e46594604c87"} Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.369111 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64" event={"ID":"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d","Type":"ContainerStarted","Data":"d7e67419bafdb55b177f08cb90adf079fa249c104d5fb05c1d95faeb7c805099"} Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.379474 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" event={"ID":"76f7811c-28c6-4764-b44a-07cbfdb400c4","Type":"ContainerStarted","Data":"57da5aca068148a194f423e041a9747cb57be073aa925e6fd67d49fbe083ce4a"} Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.379523 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" event={"ID":"76f7811c-28c6-4764-b44a-07cbfdb400c4","Type":"ContainerStarted","Data":"883ffa210144f79c2a2208615710ce575b7a1b1c54fc7a2c26331b21cfea5de0"} Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.381863 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.384810 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mbvp9" event={"ID":"063d4b06-d385-4749-8394-14041350b8e9","Type":"ContainerStarted","Data":"9250427d63cb343468313baad085fc686d1b6faf9d3b54714c10a700921c83bd"} Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.387977 4768 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-kbq4r container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.388018 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" podUID="76f7811c-28c6-4764-b44a-07cbfdb400c4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.393989 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-55hvq"] Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.394911 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wl6bz" 
event={"ID":"e21a9e0e-6de3-467a-b719-761919fd008c","Type":"ContainerStarted","Data":"96a66e64029e4bf399e802a79116da50f21e0f27c3108c9d59b8bc745f708ddb"} Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.395030 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvcqn\" (UniqueName: \"kubernetes.io/projected/22c436de-d338-440d-bbf3-35a09799cffd-kube-api-access-gvcqn\") pod \"dns-default-trtbq\" (UID: \"22c436de-d338-440d-bbf3-35a09799cffd\") " pod="openshift-dns/dns-default-trtbq" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.403219 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:22 crc kubenswrapper[4768]: E1124 16:54:22.404052 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:22.904025614 +0000 UTC m=+144.150994272 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.404877 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: E1124 16:54:22.405266 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:22.905256522 +0000 UTC m=+144.152225180 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.410486 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qp5wr" event={"ID":"85538bb7-9286-4a19-9009-89105dba2678","Type":"ContainerStarted","Data":"e6c96b1cdde5ae3d433527c8e53660694e50d97092fbe1a35ca5af94c5356039"} Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.421626 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" event={"ID":"0418ca12-7159-4da5-8b9c-3a408822a00e","Type":"ContainerStarted","Data":"185a61c06a68fecb3dfda6317c855f5ef2a0bfdb6b6111558b2450cd98e75581"} Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.421718 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4fl5\" (UniqueName: \"kubernetes.io/projected/a8574e3f-d6d0-4ab4-adb4-5d2eb82dc95a-kube-api-access-q4fl5\") pod \"machine-config-server-shbcz\" (UID: \"a8574e3f-d6d0-4ab4-adb4-5d2eb82dc95a\") " pod="openshift-machine-config-operator/machine-config-server-shbcz" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.428020 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn7j5\" (UniqueName: \"kubernetes.io/projected/165a085b-f1df-4875-ad2e-d9fb56db9f48-kube-api-access-nn7j5\") pod \"packageserver-d55dfcdfc-4w7fn\" (UID: \"165a085b-f1df-4875-ad2e-d9fb56db9f48\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.437093 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npfbr" event={"ID":"b19a457f-0893-42b7-b7ac-f3b1446fbeac","Type":"ContainerStarted","Data":"c950a526cc3c52ecc32ca8fa7a9eff6426dfb5f958f57d805a165518e79d4209"} Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.437141 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npfbr" event={"ID":"b19a457f-0893-42b7-b7ac-f3b1446fbeac","Type":"ContainerStarted","Data":"2b27cee76d56929de797476615ff1ebf2fb57c819059c04cf6cf42dae5706916"} Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.441557 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.443147 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-57xr4" event={"ID":"29755561-9db9-416d-b847-182fdb322ca5","Type":"ContainerStarted","Data":"36e8508d322b7c825f995c3dc6d86f29474c40b248f905e9078186b29df5e0c9"} Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.445165 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jwsj\" (UniqueName: \"kubernetes.io/projected/34d6a2c2-3620-4dd5-a7fd-a160030b3c7d-kube-api-access-9jwsj\") pod \"collect-profiles-29400045-ttsf8\" (UID: \"34d6a2c2-3620-4dd5-a7fd-a160030b3c7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.446571 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fvztb" event={"ID":"47d55901-e472-477e-9a26-fea65fce74a5","Type":"ContainerStarted","Data":"e8280d86b30a64a71a478aa3396d96cd2e7a63fa72a6d734b1aeff94f5eb5af6"} Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.450862 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gq6hn"] Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.456751 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsz9j" event={"ID":"22ca0047-9042-4627-a34d-1fab214b831a","Type":"ContainerStarted","Data":"dde806795f319a4982690091b7c365c3c254191b5c9a5fd33b19bf6424bb6be1"} Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.457750 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-88z72" event={"ID":"0178dda3-3c96-409e-8dee-789ecec9a47f","Type":"ContainerStarted","Data":"226bbbfcc6f8ca6203c68f70d74d3e2abfcc3bc3ab322f07026b5b4ee4dc3939"} Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.459049 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vh9gq" event={"ID":"b92e5626-f326-4da0-a2de-a10abaf78719","Type":"ContainerStarted","Data":"475186c2a764a2d361f9bd7231dc78a22c51e0003a34ce3bbbd5d614203065fe"} Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.459833 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tr27b" event={"ID":"5ff067e9-1045-4aed-a5a3-1685140287c5","Type":"ContainerStarted","Data":"1bb2f42e1e5c147a308e71bb40de37830bf8560f2a4a84182bc941c0488f7dad"} Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.486908 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdk4h\" (UniqueName: \"kubernetes.io/projected/43f897f2-d364-4b38-9345-5660dcf6e704-kube-api-access-mdk4h\") pod \"multus-admission-controller-857f4d67dd-vdhkx\" (UID: \"43f897f2-d364-4b38-9345-5660dcf6e704\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vdhkx" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.488534 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/faed1e4b-beb0-4198-8557-5c72ac6d2566-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ljqpv\" (UID: \"faed1e4b-beb0-4198-8557-5c72ac6d2566\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ljqpv" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.507884 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dtzpg" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.508596 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:22 crc kubenswrapper[4768]: E1124 16:54:22.509026 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:23.008994778 +0000 UTC m=+144.255963436 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.509427 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: E1124 16:54:22.509806 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:23.009798483 +0000 UTC m=+144.256767131 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.509941 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7544\" (UniqueName: \"kubernetes.io/projected/f6e0460e-6a3f-41d8-97f3-a2d1e1676d53-kube-api-access-x7544\") pod \"service-ca-operator-777779d784-s4wnf\" (UID: \"f6e0460e-6a3f-41d8-97f3-a2d1e1676d53\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-s4wnf" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.511187 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4dgcz"] Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.518946 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ljqpv" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.527881 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rnhf2"] Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.528574 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjf6k\" (UniqueName: \"kubernetes.io/projected/d1fbefad-f380-42f2-a71c-6c3e42dce342-kube-api-access-fjf6k\") pod \"ingress-canary-cnfj2\" (UID: \"d1fbefad-f380-42f2-a71c-6c3e42dce342\") " pod="openshift-ingress-canary/ingress-canary-cnfj2" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.528906 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cnfj2" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.544283 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smtvp\" (UniqueName: \"kubernetes.io/projected/0ffc195f-3e88-451d-8ade-f4413e41b076-kube-api-access-smtvp\") pod \"service-ca-9c57cc56f-26hqh\" (UID: \"0ffc195f-3e88-451d-8ade-f4413e41b076\") " pod="openshift-service-ca/service-ca-9c57cc56f-26hqh" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.561574 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-vdhkx" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.563420 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t74cw\" (UniqueName: \"kubernetes.io/projected/c893e46b-93d8-4545-a905-f2b0cf62a746-kube-api-access-t74cw\") pod \"csi-hostpathplugin-dfw5p\" (UID: \"c893e46b-93d8-4545-a905-f2b0cf62a746\") " pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.575522 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-lcnvd" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.584574 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrk2k\" (UniqueName: \"kubernetes.io/projected/20950f1f-be32-40c9-84e8-abb6c2650d69-kube-api-access-vrk2k\") pod \"package-server-manager-789f6589d5-2db9l\" (UID: \"20950f1f-be32-40c9-84e8-abb6c2650d69\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2db9l" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.590570 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-grlvv" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.600884 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2db9l" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.602180 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcf5d\" (UniqueName: \"kubernetes.io/projected/57e2fd0a-5292-4540-8e4a-8da54e5b541a-kube-api-access-lcf5d\") pod \"kube-storage-version-migrator-operator-b67b599dd-xdb6h\" (UID: \"57e2fd0a-5292-4540-8e4a-8da54e5b541a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdb6h" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.602487 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sznn8" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.608622 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.610042 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:22 crc kubenswrapper[4768]: E1124 16:54:22.610232 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:23.110197047 +0000 UTC m=+144.357165705 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.610679 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: E1124 16:54:22.611114 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:23.111105825 +0000 UTC m=+144.358074483 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.622638 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-26hqh" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.624069 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j64x7\" (UniqueName: \"kubernetes.io/projected/6393ad56-dadc-453f-b4f6-b7a6b52304e1-kube-api-access-j64x7\") pod \"marketplace-operator-79b997595-rhk4d\" (UID: \"6393ad56-dadc-453f-b4f6-b7a6b52304e1\") " pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.631223 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.637940 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-s4wnf" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.644016 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-shbcz" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.647078 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-bkp5p"] Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.651492 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-trtbq" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.662062 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt8t8\" (UniqueName: \"kubernetes.io/projected/8b8cf1e2-836e-4240-93b8-1cb47a164953-kube-api-access-rt8t8\") pod \"machine-config-operator-74547568cd-9pdk7\" (UID: \"8b8cf1e2-836e-4240-93b8-1cb47a164953\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9pdk7" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.680514 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kzr98"] Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.703821 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vm425"] Nov 24 16:54:22 crc kubenswrapper[4768]: W1124 16:54:22.710772 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod814a2d48_7cb7_43bd_af05_951f0ccc9fc8.slice/crio-a221c263efa5a07dba71d1a33438d6803b2fba1d817a2c034b0e044163690d21 WatchSource:0}: Error finding container a221c263efa5a07dba71d1a33438d6803b2fba1d817a2c034b0e044163690d21: Status 404 returned error can't find the container with id a221c263efa5a07dba71d1a33438d6803b2fba1d817a2c034b0e044163690d21 Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.711978 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:22 crc kubenswrapper[4768]: E1124 16:54:22.712253 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:23.212206831 +0000 UTC m=+144.459175629 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.712682 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: E1124 16:54:22.713435 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:23.213405948 +0000 UTC m=+144.460374606 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:22 crc kubenswrapper[4768]: W1124 16:54:22.714907 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d77fa56_dcd9_464c_ae68_3f61838fd961.slice/crio-20dfe164ffcae7184d2da0e01960956c334f1d8e4f399e8ef50d480ad1c20640 WatchSource:0}: Error finding container 20dfe164ffcae7184d2da0e01960956c334f1d8e4f399e8ef50d480ad1c20640: Status 404 returned error can't find the container with id 20dfe164ffcae7184d2da0e01960956c334f1d8e4f399e8ef50d480ad1c20640 Nov 24 16:54:22 crc kubenswrapper[4768]: W1124 16:54:22.727905 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafbb3133_a1d9_48c9_a496_83babf4eb0c6.slice/crio-51a36575bf57360977a5a6c48335e611917f10b4bffd22c011c19e80ca181883 WatchSource:0}: Error finding container 51a36575bf57360977a5a6c48335e611917f10b4bffd22c011c19e80ca181883: Status 404 returned error can't find the container with id 51a36575bf57360977a5a6c48335e611917f10b4bffd22c011c19e80ca181883 Nov 24 16:54:22 crc kubenswrapper[4768]: W1124 16:54:22.731508 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0953d228_0d4e_4cb5_a8d7_2a3c1709c312.slice/crio-2d81b9f31848094eacc474f0dfbd5e68ce57cd4d78d4d96b276caa4735a1595e WatchSource:0}: Error finding container 2d81b9f31848094eacc474f0dfbd5e68ce57cd4d78d4d96b276caa4735a1595e: Status 404 returned error can't find the container with id 2d81b9f31848094eacc474f0dfbd5e68ce57cd4d78d4d96b276caa4735a1595e Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.810952 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dn5t9"] Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.814972 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:22 crc kubenswrapper[4768]: E1124 16:54:22.815082 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:23.31506265 +0000 UTC m=+144.562031308 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.815332 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:22 crc kubenswrapper[4768]: E1124 16:54:22.815687 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:23.315679879 +0000 UTC m=+144.562648537 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.821824 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xxdrq"] Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.829088 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.853884 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdb6h" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.867804 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9pdk7" Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.916174 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:22 crc kubenswrapper[4768]: I1124 16:54:22.916929 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" Nov 24 16:54:22 crc kubenswrapper[4768]: E1124 16:54:22.917443 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 16:54:23.417381574 +0000 UTC m=+144.664350362 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:22 crc kubenswrapper[4768]: W1124 16:54:22.929494 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod868777b7_0ff7_4705_af3c_c453bb1418a3.slice/crio-9a37646cecc406b67d8a675fef8c0879b33abe867c7334895b452c93e50a278f WatchSource:0}: Error finding container 9a37646cecc406b67d8a675fef8c0879b33abe867c7334895b452c93e50a278f: Status 404 returned error can't find the container with id 9a37646cecc406b67d8a675fef8c0879b33abe867c7334895b452c93e50a278f Nov 24 16:54:22 crc kubenswrapper[4768]: W1124 16:54:22.938040 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod622eb95d_1893_421b_890b_0fbd87dfa0b2.slice/crio-56d45a0efead6461dde29057cd83ff6d64ba8b091ad0d2a69395cd284d7ff159 WatchSource:0}: Error finding container 56d45a0efead6461dde29057cd83ff6d64ba8b091ad0d2a69395cd284d7ff159: Status 404 returned error can't find the container with id 56d45a0efead6461dde29057cd83ff6d64ba8b091ad0d2a69395cd284d7ff159 Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.018198 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:23 crc kubenswrapper[4768]: E1124 16:54:23.018708 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:23.518697256 +0000 UTC m=+144.765665914 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.073466 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wkrfg"] Nov 24 16:54:23 crc kubenswrapper[4768]: W1124 16:54:23.079236 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod666315a2_e8c4_42db_849b_d4c9e0d437c1.slice/crio-dd2534638fc8352ae644cee4817f7b17ead000b50dd807cbb48526e5e4818f13 WatchSource:0}: Error finding container dd2534638fc8352ae644cee4817f7b17ead000b50dd807cbb48526e5e4818f13: Status 404 returned error can't find the container with id dd2534638fc8352ae644cee4817f7b17ead000b50dd807cbb48526e5e4818f13 Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.119461 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:23 crc kubenswrapper[4768]: E1124 16:54:23.119937 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:23.619921135 +0000 UTC m=+144.866889793 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:23 crc kubenswrapper[4768]: W1124 16:54:23.218154 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77380a57_af04_4e2e_8791_d00f466f31a9.slice/crio-c020d77265a69e3b597a2c531fddac6a9af938544c9da515b8b881774238ecaa WatchSource:0}: Error finding container c020d77265a69e3b597a2c531fddac6a9af938544c9da515b8b881774238ecaa: Status 404 returned error can't find the container with id c020d77265a69e3b597a2c531fddac6a9af938544c9da515b8b881774238ecaa Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.221454 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:23 crc kubenswrapper[4768]: E1124 16:54:23.221898 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:23.721882588 +0000 UTC m=+144.968851246 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.322497 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:23 crc kubenswrapper[4768]: E1124 16:54:23.323229 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:23.82320662 +0000 UTC m=+145.070175278 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.407915 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn"] Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.424299 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:23 crc kubenswrapper[4768]: E1124 16:54:23.424856 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:23.924838712 +0000 UTC m=+145.171807370 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.434840 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-trtbq"] Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.445598 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dtzpg"] Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.447443 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ljqpv"] Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.476710 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-shbcz" event={"ID":"a8574e3f-d6d0-4ab4-adb4-5d2eb82dc95a","Type":"ContainerStarted","Data":"b3b5a6f5f25c71ba7213a4bad8086775d386612123b3fb556b22e5ad8f0c9bc9"} Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.498365 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-lcnvd" event={"ID":"666315a2-e8c4-42db-849b-d4c9e0d437c1","Type":"ContainerStarted","Data":"dd2534638fc8352ae644cee4817f7b17ead000b50dd807cbb48526e5e4818f13"} Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.507341 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vm425" event={"ID":"814a2d48-7cb7-43bd-af05-951f0ccc9fc8","Type":"ContainerStarted","Data":"a221c263efa5a07dba71d1a33438d6803b2fba1d817a2c034b0e044163690d21"} Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 
16:54:23.510822 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-55hvq" event={"ID":"eba3e0f2-2704-43dd-b433-3a26b5200e77","Type":"ContainerStarted","Data":"8b41789440846a43e687e667d12acaa9fc28cbbecdceb8b2b82ccb6380ed27e6"} Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.523368 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mbvp9" event={"ID":"063d4b06-d385-4749-8394-14041350b8e9","Type":"ContainerStarted","Data":"120e980d310da85929714c91d9cb8b124eb6cb85ff513dd83ef17c83ea9a808c"} Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.525464 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:23 crc kubenswrapper[4768]: E1124 16:54:23.525657 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:24.025627918 +0000 UTC m=+145.272596576 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.525858 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.525913 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dn5t9" event={"ID":"622eb95d-1893-421b-890b-0fbd87dfa0b2","Type":"ContainerStarted","Data":"56d45a0efead6461dde29057cd83ff6d64ba8b091ad0d2a69395cd284d7ff159"} Nov 24 16:54:23 crc kubenswrapper[4768]: E1124 16:54:23.526421 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:24.026403242 +0000 UTC m=+145.273371900 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.534829 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wl6bz" event={"ID":"e21a9e0e-6de3-467a-b719-761919fd008c","Type":"ContainerStarted","Data":"c4cbf36cb0f20cf5146db5ea885e945619fcb7dd9ebba4edd60fac85f3e161f1"} Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.539428 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kzr98" event={"ID":"0953d228-0d4e-4cb5-a8d7-2a3c1709c312","Type":"ContainerStarted","Data":"2d81b9f31848094eacc474f0dfbd5e68ce57cd4d78d4d96b276caa4735a1595e"} Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.541904 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-88z72" event={"ID":"0178dda3-3c96-409e-8dee-789ecec9a47f","Type":"ContainerStarted","Data":"0c71228b4fcb63a1bb469d895d77d5dd4667a64decb0eae4835f73db0910fe7a"} Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.542667 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-88z72" Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.544688 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsz9j" event={"ID":"22ca0047-9042-4627-a34d-1fab214b831a","Type":"ContainerStarted","Data":"2b24cb956596a8aec3d5720cf0670db9e439621cfafff433664f69dbe99366d5"} Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.545058 4768 patch_prober.go:28] interesting pod/downloads-7954f5f757-88z72 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.545115 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-88z72" podUID="0178dda3-3c96-409e-8dee-789ecec9a47f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.556171 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-vh9gq" event={"ID":"b92e5626-f326-4da0-a2de-a10abaf78719","Type":"ContainerStarted","Data":"e83eb01995f72404131d6f1df3c5293ff09caa3a3076448951a93f66ce85e9fc"} Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.568300 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qp5wr" event={"ID":"85538bb7-9286-4a19-9009-89105dba2678","Type":"ContainerStarted","Data":"31e28834589d2ff6d764085fda83537e23c5444ecc354f78f89b44638fa11386"} Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.577997 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" event={"ID":"77380a57-af04-4e2e-8791-d00f466f31a9","Type":"ContainerStarted","Data":"c020d77265a69e3b597a2c531fddac6a9af938544c9da515b8b881774238ecaa"} Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.595531 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bkp5p" event={"ID":"afbb3133-a1d9-48c9-a496-83babf4eb0c6","Type":"ContainerStarted","Data":"51a36575bf57360977a5a6c48335e611917f10b4bffd22c011c19e80ca181883"} Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.595571 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-s4wnf"] Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.603487 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xxdrq" event={"ID":"868777b7-0ff7-4705-af3c-c453bb1418a3","Type":"ContainerStarted","Data":"9a37646cecc406b67d8a675fef8c0879b33abe867c7334895b452c93e50a278f"} Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.605222 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fvztb" event={"ID":"47d55901-e472-477e-9a26-fea65fce74a5","Type":"ContainerStarted","Data":"68d3b9cce1535f31e8e18579b6ea411e2c70768d6a48f08c10684d47cb7e968b"} Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.607045 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" event={"ID":"86a58543-2a12-4886-93ce-8d25432a2166","Type":"ContainerStarted","Data":"db7b499585059eba4ffca93478ba6d2e81b64acd9bc1185bb7972149d595c737"} Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.608393 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cnfj2"] Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.614287 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8"] Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.616422 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sznn8"] Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.618627 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-57xr4" event={"ID":"29755561-9db9-416d-b847-182fdb322ca5","Type":"ContainerStarted","Data":"ddae1a7b76c9a36a4e7feda7d94862ad0d44ccbe7e1f2330066c5a69973af83d"} Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.619713 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-57xr4" Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.619755 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-grlvv"] Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.623021 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-npfbr" podStartSLOduration=122.62300443 podStartE2EDuration="2m2.62300443s" podCreationTimestamp="2025-11-24 16:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:23.622201465 
+0000 UTC m=+144.869170123" watchObservedRunningTime="2025-11-24 16:54:23.62300443 +0000 UTC m=+144.869973088" Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.623766 4768 patch_prober.go:28] interesting pod/console-operator-58897d9998-57xr4 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.623811 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-57xr4" podUID="29755561-9db9-416d-b847-182fdb322ca5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.627493 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:23 crc kubenswrapper[4768]: E1124 16:54:23.628767 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:24.128743186 +0000 UTC m=+145.375711844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.637642 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vdhkx"] Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.648543 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-26hqh"] Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.653995 4768 generic.go:334] "Generic (PLEG): container finished" podID="0418ca12-7159-4da5-8b9c-3a408822a00e" containerID="03e29b61dafeabf98fd6efb7dba6c967154d2061c357d622ab82bc747c028e35" exitCode=0 Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.654186 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" event={"ID":"0418ca12-7159-4da5-8b9c-3a408822a00e","Type":"ContainerDied","Data":"03e29b61dafeabf98fd6efb7dba6c967154d2061c357d622ab82bc747c028e35"} Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.670194 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" podStartSLOduration=121.670169059 podStartE2EDuration="2m1.670169059s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:23.664713561 +0000 UTC m=+144.911682219" 
watchObservedRunningTime="2025-11-24 16:54:23.670169059 +0000 UTC m=+144.917137717" Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.675695 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tr27b" event={"ID":"5ff067e9-1045-4aed-a5a3-1685140287c5","Type":"ContainerStarted","Data":"5844ec8487e9b3510b6b8998b230d0d905bc6216c2df403a6ab357311c89baec"} Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.681300 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dfw5p"] Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.682979 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rnhf2" event={"ID":"7d77fa56-dcd9-464c-ae68-3f61838fd961","Type":"ContainerStarted","Data":"20dfe164ffcae7184d2da0e01960956c334f1d8e4f399e8ef50d480ad1c20640"} Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.686771 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" event={"ID":"0675c0cb-77d3-43c1-a7ba-ff51c9307f21","Type":"ContainerStarted","Data":"a33bcb2bdbcedab3d8991a35ecaa9f09e2445388d0011d74464e1a79f5552e30"} Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.688586 4768 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-kbq4r container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.688645 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" podUID="76f7811c-28c6-4764-b44a-07cbfdb400c4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.727232 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdb6h"] Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.729611 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:23 crc kubenswrapper[4768]: E1124 16:54:23.731436 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:24.23141531 +0000 UTC m=+145.478384178 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.734782 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9pdk7"] Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.740239 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2db9l"] Nov 24 16:54:23 crc kubenswrapper[4768]: W1124 16:54:23.766671 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57e2fd0a_5292_4540_8e4a_8da54e5b541a.slice/crio-9fd02840dfdcae5be73b33b3827b17bf755a929ac9b798b8f65e33a08cd70968 WatchSource:0}: Error finding container 9fd02840dfdcae5be73b33b3827b17bf755a929ac9b798b8f65e33a08cd70968: Status 404 returned error can't find the container with id 9fd02840dfdcae5be73b33b3827b17bf755a929ac9b798b8f65e33a08cd70968 Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.834084 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:23 crc kubenswrapper[4768]: E1124 16:54:23.834557 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:24.334529547 +0000 UTC m=+145.581498205 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.835053 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:23 crc kubenswrapper[4768]: E1124 16:54:23.835449 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:24.335437205 +0000 UTC m=+145.582405863 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:23 crc kubenswrapper[4768]: W1124 16:54:23.841808 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20950f1f_be32_40c9_84e8_abb6c2650d69.slice/crio-332a586f1e73eab8bfeecd4d9752988e2dc13072f376b8e16e0bd4c091f987b9 WatchSource:0}: Error finding container 332a586f1e73eab8bfeecd4d9752988e2dc13072f376b8e16e0bd4c091f987b9: Status 404 returned error can't find the container with id 332a586f1e73eab8bfeecd4d9752988e2dc13072f376b8e16e0bd4c091f987b9 Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.857518 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rhk4d"] Nov 24 16:54:23 crc kubenswrapper[4768]: W1124 16:54:23.935283 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6393ad56_dadc_453f_b4f6_b7a6b52304e1.slice/crio-defafbe864f92768e2fdf900b52602860bb7dd9a01406cae9d63fec0e9359c3c WatchSource:0}: Error finding container defafbe864f92768e2fdf900b52602860bb7dd9a01406cae9d63fec0e9359c3c: Status 404 returned error can't find the container with id defafbe864f92768e2fdf900b52602860bb7dd9a01406cae9d63fec0e9359c3c Nov 24 16:54:23 crc kubenswrapper[4768]: I1124 16:54:23.935946 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:23 crc kubenswrapper[4768]: E1124 16:54:23.936967 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:24.436940353 +0000 UTC m=+145.683909011 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.038094 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:24 crc kubenswrapper[4768]: E1124 16:54:24.038601 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:24.538586675 +0000 UTC m=+145.785555333 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.139866 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:24 crc kubenswrapper[4768]: E1124 16:54:24.140067 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:24.640037042 +0000 UTC m=+145.887005700 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.140377 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:24 crc kubenswrapper[4768]: E1124 16:54:24.140676 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:24.640663921 +0000 UTC m=+145.887632569 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.241042 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:24 crc kubenswrapper[4768]: E1124 16:54:24.241495 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:24.741477338 +0000 UTC m=+145.988445996 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.343542 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:24 crc kubenswrapper[4768]: E1124 16:54:24.344334 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:24.844318787 +0000 UTC m=+146.091287445 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.383682 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64" podStartSLOduration=122.383659645 podStartE2EDuration="2m2.383659645s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:24.381676464 +0000 UTC m=+145.628645122" watchObservedRunningTime="2025-11-24 16:54:24.383659645 +0000 UTC m=+145.630628303" Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.444940 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:24 crc kubenswrapper[4768]: E1124 16:54:24.445128 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:24.945099703 +0000 UTC m=+146.192068351 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.445316 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:24 crc kubenswrapper[4768]: E1124 16:54:24.445699 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:24.945691251 +0000 UTC m=+146.192659909 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.503432 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qp5wr" podStartSLOduration=122.503407404 podStartE2EDuration="2m2.503407404s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:24.459847286 +0000 UTC m=+145.706815944" watchObservedRunningTime="2025-11-24 16:54:24.503407404 +0000 UTC m=+145.750376062" Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.503719 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-88z72" podStartSLOduration=123.503714503 podStartE2EDuration="2m3.503714503s" podCreationTimestamp="2025-11-24 16:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:24.501692951 +0000 UTC m=+145.748661629" watchObservedRunningTime="2025-11-24 16:54:24.503714503 +0000 UTC m=+145.750683161" Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.540002 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-57xr4" podStartSLOduration=123.539984407 podStartE2EDuration="2m3.539984407s" podCreationTimestamp="2025-11-24 16:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:24.538087559 +0000 UTC m=+145.785056217" watchObservedRunningTime="2025-11-24 16:54:24.539984407 +0000 UTC m=+145.786953065" Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 
16:54:24.547174 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:24 crc kubenswrapper[4768]: E1124 16:54:24.547369 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:25.047333553 +0000 UTC m=+146.294302201 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.547616 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:24 crc kubenswrapper[4768]: E1124 16:54:24.547963 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:25.047950442 +0000 UTC m=+146.294919100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.649725 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:24 crc kubenswrapper[4768]: E1124 16:54:24.650174 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:25.150123151 +0000 UTC m=+146.397091809 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.650462 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:24 crc kubenswrapper[4768]: E1124 16:54:24.650833 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:25.150817442 +0000 UTC m=+146.397786100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.694507 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-s4wnf" event={"ID":"f6e0460e-6a3f-41d8-97f3-a2d1e1676d53","Type":"ContainerStarted","Data":"81adfaf625ebf3db5fcc8bf31f7895f4897a8c35036e0e53ddb01c986b915c11"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.694566 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-s4wnf" event={"ID":"f6e0460e-6a3f-41d8-97f3-a2d1e1676d53","Type":"ContainerStarted","Data":"b95de9ec20faeeec943a7463575b2ec2a8800ee5b7bd6e45042dfa8f3580c079"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.698565 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rnhf2" event={"ID":"7d77fa56-dcd9-464c-ae68-3f61838fd961","Type":"ContainerStarted","Data":"e0c6c697aeb91699ea12090ace33fe65b1ac40823329864d6dffc754481b3f36"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.698592 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rnhf2" event={"ID":"7d77fa56-dcd9-464c-ae68-3f61838fd961","Type":"ContainerStarted","Data":"2dd48d3b35e5de526de813e860a0d3be6697d4b6c180aa211d90464ef4b71a8a"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.705883 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vm425" event={"ID":"814a2d48-7cb7-43bd-af05-951f0ccc9fc8","Type":"ContainerStarted","Data":"cce722b6c0fe36f19a74c8e3282e9a8900dc40d0d096562afd2de27720b546cc"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.705910 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vm425" event={"ID":"814a2d48-7cb7-43bd-af05-951f0ccc9fc8","Type":"ContainerStarted","Data":"7b25cd109bdfd92f28a283785a1a69d3ecb6a7a3625769546fbc7b8fe7ed0583"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.717770 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-s4wnf" podStartSLOduration=122.717749938 podStartE2EDuration="2m2.717749938s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:24.716539601 +0000 UTC m=+145.963508259" watchObservedRunningTime="2025-11-24 16:54:24.717749938 +0000 UTC m=+145.964718596" Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.718684 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" event={"ID":"77380a57-af04-4e2e-8791-d00f466f31a9","Type":"ContainerStarted","Data":"b9f400d3180e6d953c57560a2fa738d7d7c8e1d670dc2bb466fda4f7da9d48ce"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.726327 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsz9j" event={"ID":"22ca0047-9042-4627-a34d-1fab214b831a","Type":"ContainerStarted","Data":"1170ec75eaa031289743008d4cb552c86b3a6f5f5270d420eea356c2a17189f8"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.743324 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wl6bz" event={"ID":"e21a9e0e-6de3-467a-b719-761919fd008c","Type":"ContainerStarted","Data":"23dcfce76578c542a98160bc324eb7d3d6583e4948a7c9ede9e1ef820834a646"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.746635 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-wkrfg" podStartSLOduration=122.746615325 podStartE2EDuration="2m2.746615325s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:24.743830339 +0000 UTC m=+145.990799007" watchObservedRunningTime="2025-11-24 16:54:24.746615325 +0000 UTC m=+145.993583983" Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.747032 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2db9l" event={"ID":"20950f1f-be32-40c9-84e8-abb6c2650d69","Type":"ContainerStarted","Data":"123e1c9949083faad56f620a2b062c526e7501389baf870b5a444445a27021eb"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.747089 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2db9l" event={"ID":"20950f1f-be32-40c9-84e8-abb6c2650d69","Type":"ContainerStarted","Data":"332a586f1e73eab8bfeecd4d9752988e2dc13072f376b8e16e0bd4c091f987b9"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.749926 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-26hqh" event={"ID":"0ffc195f-3e88-451d-8ade-f4413e41b076","Type":"ContainerStarted","Data":"6249579edb5707a85d538872e20f762128325545fe6d1a60da7fc73f6ad05b0f"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.749963 4768 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-26hqh" event={"ID":"0ffc195f-3e88-451d-8ade-f4413e41b076","Type":"ContainerStarted","Data":"f74c780819f0de6da4c2bd949fad71c69f35d7f1935978d82e8b3c764cfd3579"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.751660 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:24 crc kubenswrapper[4768]: E1124 16:54:24.752774 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:25.252755364 +0000 UTC m=+146.499724022 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.755469 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dn5t9" event={"ID":"622eb95d-1893-421b-890b-0fbd87dfa0b2","Type":"ContainerStarted","Data":"15e304687ee86c03c0e432224b36d7c142e4d757ad9b37937320fbd18036e5a6"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.757437 4768 generic.go:334] "Generic (PLEG): container finished" podID="47d55901-e472-477e-9a26-fea65fce74a5" containerID="68d3b9cce1535f31e8e18579b6ea411e2c70768d6a48f08c10684d47cb7e968b" exitCode=0 Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.757511 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fvztb" event={"ID":"47d55901-e472-477e-9a26-fea65fce74a5","Type":"ContainerDied","Data":"68d3b9cce1535f31e8e18579b6ea411e2c70768d6a48f08c10684d47cb7e968b"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.767921 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" event={"ID":"86a58543-2a12-4886-93ce-8d25432a2166","Type":"ContainerStarted","Data":"8cdb7fa2569f15197b94653da351520e55440ca4811385ac8b609185692a46c2"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.767990 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.772473 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsz9j" podStartSLOduration=122.772461529 podStartE2EDuration="2m2.772461529s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:24.770261451 +0000 UTC m=+146.017230109" watchObservedRunningTime="2025-11-24 16:54:24.772461529 +0000 UTC m=+146.019430187" Nov 24 
16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.773685 4768 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4dgcz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" start-of-body= Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.773757 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" podUID="86a58543-2a12-4886-93ce-8d25432a2166" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.776501 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vdhkx" event={"ID":"43f897f2-d364-4b38-9345-5660dcf6e704","Type":"ContainerStarted","Data":"d9f82509899b275e9f7dadafafc04d0efffa6619ae07177a53b0f92c897fedbb"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.776537 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vdhkx" event={"ID":"43f897f2-d364-4b38-9345-5660dcf6e704","Type":"ContainerStarted","Data":"a309bfce1bff1cc5e694c2512f88d459963773a9af6722989339aaefcfb102ff"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.778339 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdb6h" event={"ID":"57e2fd0a-5292-4540-8e4a-8da54e5b541a","Type":"ContainerStarted","Data":"25431edd22f75a17558462bd6f83ae69c71464fa6823ab811be2b1fb11cf29ba"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.778374 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdb6h" event={"ID":"57e2fd0a-5292-4540-8e4a-8da54e5b541a","Type":"ContainerStarted","Data":"9fd02840dfdcae5be73b33b3827b17bf755a929ac9b798b8f65e33a08cd70968"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.786801 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-grlvv" event={"ID":"27e1bc8e-3020-4916-ae7e-6d07fe111973","Type":"ContainerStarted","Data":"7c02445a3689c1ce6c8d50240363a27513cceb39edaf36c674788f05d3f980a0"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.786861 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-grlvv" event={"ID":"27e1bc8e-3020-4916-ae7e-6d07fe111973","Type":"ContainerStarted","Data":"7e02d7623f1d728661be04d4e0bbd522106bbf5d688ee4948644f012567a834e"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.789200 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mbvp9" event={"ID":"063d4b06-d385-4749-8394-14041350b8e9","Type":"ContainerStarted","Data":"bbf4fb8fcc31fa4cd6e66b7f92f145d15b5c52ad9016e7d2246365af55e6a3a7"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.806474 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" event={"ID":"c893e46b-93d8-4545-a905-f2b0cf62a746","Type":"ContainerStarted","Data":"970b0168e3fc6dfdd005408ecd3a6b899f5c966212511982a249424e5d417edc"} Nov 24 16:54:24 crc 
kubenswrapper[4768]: I1124 16:54:24.823293 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" podStartSLOduration=123.82328107 podStartE2EDuration="2m3.82328107s" podCreationTimestamp="2025-11-24 16:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:24.806211406 +0000 UTC m=+146.053180064" watchObservedRunningTime="2025-11-24 16:54:24.82328107 +0000 UTC m=+146.070249728" Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.823943 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dn5t9" podStartSLOduration=122.82393605 podStartE2EDuration="2m2.82393605s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:24.823398134 +0000 UTC m=+146.070366782" watchObservedRunningTime="2025-11-24 16:54:24.82393605 +0000 UTC m=+146.070904698" Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.824165 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ljqpv" event={"ID":"faed1e4b-beb0-4198-8557-5c72ac6d2566","Type":"ContainerStarted","Data":"0c42758749c5a08d3e5dd67ec2d0bad24d5617c78290866a3c922236a44e2061"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.824219 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ljqpv" event={"ID":"faed1e4b-beb0-4198-8557-5c72ac6d2566","Type":"ContainerStarted","Data":"f7df661e3952e496b5c72ef469cd559ed21ec047943a6c94fba0cf00edd547ab"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.846037 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8" event={"ID":"34d6a2c2-3620-4dd5-a7fd-a160030b3c7d","Type":"ContainerStarted","Data":"769e2692ea50cf6b0edcb7b7e7c91ed8a8c3484a19c12451b191f27cf6e7fb35"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.846098 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8" event={"ID":"34d6a2c2-3620-4dd5-a7fd-a160030b3c7d","Type":"ContainerStarted","Data":"0e98e13b7888c2b8f0afa9dd98b037e9baeeeee8b519c96e9c154eb8245cb87a"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.853552 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" event={"ID":"6393ad56-dadc-453f-b4f6-b7a6b52304e1","Type":"ContainerStarted","Data":"b842ad2a0550c3e6ff4623d31bdb892981a9ed84a024a19e256b0570542f11f7"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.853620 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.853630 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" event={"ID":"6393ad56-dadc-453f-b4f6-b7a6b52304e1","Type":"ContainerStarted","Data":"defafbe864f92768e2fdf900b52602860bb7dd9a01406cae9d63fec0e9359c3c"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.853955 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:24 crc kubenswrapper[4768]: E1124 16:54:24.855828 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:25.355776408 +0000 UTC m=+146.602745066 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.859483 4768 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rhk4d container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.859532 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" podUID="6393ad56-dadc-453f-b4f6-b7a6b52304e1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.861001 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bkp5p" event={"ID":"afbb3133-a1d9-48c9-a496-83babf4eb0c6","Type":"ContainerStarted","Data":"218e0939b77034678aac0d04bc8ef289863a41eca407d44ad43365b08990de75"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.863246 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-shbcz" event={"ID":"a8574e3f-d6d0-4ab4-adb4-5d2eb82dc95a","Type":"ContainerStarted","Data":"add025999e1d276d202a25000ee1b2f5a9ed6905bcbda1498b88030f1d79ea4a"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.869855 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-lcnvd" event={"ID":"666315a2-e8c4-42db-849b-d4c9e0d437c1","Type":"ContainerStarted","Data":"ec9ed197cd8215637c3ed8d4230b48551a6f9b0c143b68004d6d581feea11cfc"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.893418 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ljqpv" podStartSLOduration=122.893390284 podStartE2EDuration="2m2.893390284s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:24.865196308 +0000 UTC m=+146.112164966" watchObservedRunningTime="2025-11-24 16:54:24.893390284 +0000 UTC m=+146.140358952" Nov 24 16:54:24 crc 
kubenswrapper[4768]: I1124 16:54:24.893908 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dtzpg" event={"ID":"89c4d056-b780-4eb8-8860-44c16b3cb1ba","Type":"ContainerStarted","Data":"53c1a082f13ed4f0d53ca77b448e6fe0d663f9525a5897b7810db94faad2a3cb"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.893962 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dtzpg" event={"ID":"89c4d056-b780-4eb8-8860-44c16b3cb1ba","Type":"ContainerStarted","Data":"916ff756955cc8c956456e5c25da1b0630355c3822945737309710ea3fc9cf47"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.894367 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xdb6h" podStartSLOduration=122.894341063 podStartE2EDuration="2m2.894341063s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:24.892098004 +0000 UTC m=+146.139066652" watchObservedRunningTime="2025-11-24 16:54:24.894341063 +0000 UTC m=+146.141309721" Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.896325 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn" event={"ID":"165a085b-f1df-4875-ad2e-d9fb56db9f48","Type":"ContainerStarted","Data":"e670ac5eef00405795b7517a9416d09f58bca778016fb1ece88305491dc49a69"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.896368 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn" event={"ID":"165a085b-f1df-4875-ad2e-d9fb56db9f48","Type":"ContainerStarted","Data":"ed4e1b8ce9422d366973012be2756443b43e1c7c8891709a71fd0698dc7878a6"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.897115 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn" Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.900013 4768 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-4w7fn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body= Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.900054 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn" podUID="165a085b-f1df-4875-ad2e-d9fb56db9f48" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.902898 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-55hvq" event={"ID":"eba3e0f2-2704-43dd-b433-3a26b5200e77","Type":"ContainerStarted","Data":"14323c6071f3123cc72355409829d4171ff131b8b4dfa755fc4739c0e236f7f7"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.902925 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-55hvq" 
event={"ID":"eba3e0f2-2704-43dd-b433-3a26b5200e77","Type":"ContainerStarted","Data":"f648689acdbc4972049cec39fbbbd66318b1c6e6f3fe8be8745597b05db008fa"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.907398 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9pdk7" event={"ID":"8b8cf1e2-836e-4240-93b8-1cb47a164953","Type":"ContainerStarted","Data":"2dd10cb813754878b5ea89a005f832cce9abe54b92254a8b476d32101e9097ed"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.907450 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9pdk7" event={"ID":"8b8cf1e2-836e-4240-93b8-1cb47a164953","Type":"ContainerStarted","Data":"a97e7d6107556ca3bb54a3deac32a079214761a9192e4476afb13c9312a4415e"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.908363 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kzr98" event={"ID":"0953d228-0d4e-4cb5-a8d7-2a3c1709c312","Type":"ContainerStarted","Data":"3b7bfb9c41735c526e577b13132d99bb4deef60b0b9a6fee7766d9612a2c2df8"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.911664 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-trtbq" event={"ID":"22c436de-d338-440d-bbf3-35a09799cffd","Type":"ContainerStarted","Data":"405497e41d36afda5add2ecf0c6aa689771ca3db1760482dbbc5f81c220cc887"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.911694 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-trtbq" event={"ID":"22c436de-d338-440d-bbf3-35a09799cffd","Type":"ContainerStarted","Data":"7ee3fd11b49a2dd62f5b3405cf681ec97a37bd946be77c330fc1d715acbb75b9"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.912361 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-mbvp9" podStartSLOduration=122.912326475 podStartE2EDuration="2m2.912326475s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:24.911434008 +0000 UTC m=+146.158402676" watchObservedRunningTime="2025-11-24 16:54:24.912326475 +0000 UTC m=+146.159295133" Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.916900 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" event={"ID":"0418ca12-7159-4da5-8b9c-3a408822a00e","Type":"ContainerStarted","Data":"02b0b878b2bb8c87dceeb85a8b381026b3f44d6f315ca227b592a2fa99e4967c"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.920747 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sznn8" event={"ID":"3fe2ee62-cd6a-42be-b839-4c677251a006","Type":"ContainerStarted","Data":"e4ddc529816c2398987550775054a8b8c4d7187cea2649b4ebea2e918e07e001"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.920780 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sznn8" event={"ID":"3fe2ee62-cd6a-42be-b839-4c677251a006","Type":"ContainerStarted","Data":"1b14d7d0a75fdc5386c6dcb1128945e9b099d199e1fdb432c3fc8a009e81ea90"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.921574 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sznn8" Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.923470 4768 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-sznn8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.923521 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sznn8" podUID="3fe2ee62-cd6a-42be-b839-4c677251a006" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.932981 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tr27b" event={"ID":"5ff067e9-1045-4aed-a5a3-1685140287c5","Type":"ContainerStarted","Data":"cb38bbba051b69dede38a188496acac8841a267e0cfe14ceb1af0c678601131e"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.940622 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8" podStartSLOduration=123.940600544 podStartE2EDuration="2m3.940600544s" podCreationTimestamp="2025-11-24 16:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:24.939248242 +0000 UTC m=+146.186216900" watchObservedRunningTime="2025-11-24 16:54:24.940600544 +0000 UTC m=+146.187569202" Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.942683 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cnfj2" event={"ID":"d1fbefad-f380-42f2-a71c-6c3e42dce342","Type":"ContainerStarted","Data":"a78ee0897cbbaca2c76e3a9dbef519a998be87151fc509ffaf0207666ed6d705"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.942855 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cnfj2" event={"ID":"d1fbefad-f380-42f2-a71c-6c3e42dce342","Type":"ContainerStarted","Data":"41a9c0f5d2a2921b05193a46bcf6d1ee62fa33f77cf85725ae3392c1b5daf17f"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.952877 4768 generic.go:334] "Generic (PLEG): container finished" podID="0675c0cb-77d3-43c1-a7ba-ff51c9307f21" containerID="6858859e51e9295ce0cd840b0407158a1d2d9c0381f102752ab1ec60261927e4" exitCode=0 Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.952975 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" event={"ID":"0675c0cb-77d3-43c1-a7ba-ff51c9307f21","Type":"ContainerDied","Data":"6858859e51e9295ce0cd840b0407158a1d2d9c0381f102752ab1ec60261927e4"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.954748 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:24 crc kubenswrapper[4768]: E1124 16:54:24.955094 4768 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:25.455074169 +0000 UTC m=+146.702042827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.956124 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:24 crc kubenswrapper[4768]: E1124 16:54:24.956558 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:25.456544064 +0000 UTC m=+146.703512712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.958504 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xxdrq" event={"ID":"868777b7-0ff7-4705-af3c-c453bb1418a3","Type":"ContainerStarted","Data":"1ad1904e337a5128f08d9b49d2820d8e743a130b60e7af3bc8d45b4d72931fcd"} Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.959073 4768 patch_prober.go:28] interesting pod/console-operator-58897d9998-57xr4 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.959151 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-57xr4" podUID="29755561-9db9-416d-b847-182fdb322ca5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.960029 4768 patch_prober.go:28] interesting pod/downloads-7954f5f757-88z72 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.960075 4768 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console/downloads-7954f5f757-88z72" podUID="0178dda3-3c96-409e-8dee-789ecec9a47f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.960988 4768 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-kbq4r container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 24 16:54:24 crc kubenswrapper[4768]: I1124 16:54:24.961031 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" podUID="76f7811c-28c6-4764-b44a-07cbfdb400c4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.043295 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-grlvv" podStartSLOduration=123.043264288 podStartE2EDuration="2m3.043264288s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:25.003217087 +0000 UTC m=+146.250185765" watchObservedRunningTime="2025-11-24 16:54:25.043264288 +0000 UTC m=+146.290232946" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.043665 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn" podStartSLOduration=123.04365945 podStartE2EDuration="2m3.04365945s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:25.038327436 +0000 UTC m=+146.285296094" watchObservedRunningTime="2025-11-24 16:54:25.04365945 +0000 UTC m=+146.290628108" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.063687 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:25 crc kubenswrapper[4768]: E1124 16:54:25.064469 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:25.564437398 +0000 UTC m=+146.811406056 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.065667 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:25 crc kubenswrapper[4768]: E1124 16:54:25.068219 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:25.568199034 +0000 UTC m=+146.815167692 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.079916 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tr27b" podStartSLOduration=123.079887373 podStartE2EDuration="2m3.079887373s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:25.075145247 +0000 UTC m=+146.322113895" watchObservedRunningTime="2025-11-24 16:54:25.079887373 +0000 UTC m=+146.326856031" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.114333 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" podStartSLOduration=123.11431706 podStartE2EDuration="2m3.11431706s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:25.111761092 +0000 UTC m=+146.358729750" watchObservedRunningTime="2025-11-24 16:54:25.11431706 +0000 UTC m=+146.361285708" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.167826 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:25 crc kubenswrapper[4768]: E1124 16:54:25.168318 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:25.668270928 +0000 UTC m=+146.915239646 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.182148 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-55hvq" podStartSLOduration=124.182120383 podStartE2EDuration="2m4.182120383s" podCreationTimestamp="2025-11-24 16:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:25.141179605 +0000 UTC m=+146.388148283" watchObservedRunningTime="2025-11-24 16:54:25.182120383 +0000 UTC m=+146.429089031" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.184072 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xxdrq" podStartSLOduration=123.184058343 podStartE2EDuration="2m3.184058343s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:25.178879694 +0000 UTC m=+146.425848372" watchObservedRunningTime="2025-11-24 16:54:25.184058343 +0000 UTC m=+146.431026991" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.227942 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sznn8" podStartSLOduration=123.22791752 podStartE2EDuration="2m3.22791752s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:25.224052761 +0000 UTC m=+146.471021429" watchObservedRunningTime="2025-11-24 16:54:25.22791752 +0000 UTC m=+146.474886178" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.268945 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" podStartSLOduration=123.26892895 podStartE2EDuration="2m3.26892895s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:25.267763914 +0000 UTC m=+146.514732572" watchObservedRunningTime="2025-11-24 16:54:25.26892895 +0000 UTC m=+146.515897608" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.269534 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:25 crc kubenswrapper[4768]: E1124 
16:54:25.269926 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:25.76991216 +0000 UTC m=+147.016880818 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.304401 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-cnfj2" podStartSLOduration=6.304381949 podStartE2EDuration="6.304381949s" podCreationTimestamp="2025-11-24 16:54:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:25.303400839 +0000 UTC m=+146.550369507" watchObservedRunningTime="2025-11-24 16:54:25.304381949 +0000 UTC m=+146.551350607" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.343953 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-vh9gq" podStartSLOduration=124.343933794 podStartE2EDuration="2m4.343933794s" podCreationTimestamp="2025-11-24 16:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:25.341337574 +0000 UTC m=+146.588306232" watchObservedRunningTime="2025-11-24 16:54:25.343933794 +0000 UTC m=+146.590902452" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.370983 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:25 crc kubenswrapper[4768]: E1124 16:54:25.371245 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:25.871225432 +0000 UTC m=+147.118194090 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.377092 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dtzpg" podStartSLOduration=123.377077412 podStartE2EDuration="2m3.377077412s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:25.376198515 +0000 UTC m=+146.623167173" watchObservedRunningTime="2025-11-24 16:54:25.377077412 +0000 UTC m=+146.624046060" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.421649 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-lcnvd" podStartSLOduration=123.421628491 podStartE2EDuration="2m3.421628491s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:25.421374163 +0000 UTC m=+146.668342821" watchObservedRunningTime="2025-11-24 16:54:25.421628491 +0000 UTC m=+146.668597149" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.462968 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-bkp5p" podStartSLOduration=124.46294409 podStartE2EDuration="2m4.46294409s" podCreationTimestamp="2025-11-24 16:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:25.460590307 +0000 UTC m=+146.707558965" watchObservedRunningTime="2025-11-24 16:54:25.46294409 +0000 UTC m=+146.709912758" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.472240 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:25 crc kubenswrapper[4768]: E1124 16:54:25.472637 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:25.972624927 +0000 UTC m=+147.219593585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.498882 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-shbcz" podStartSLOduration=6.498865953 podStartE2EDuration="6.498865953s" podCreationTimestamp="2025-11-24 16:54:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:25.496457479 +0000 UTC m=+146.743426137" watchObservedRunningTime="2025-11-24 16:54:25.498865953 +0000 UTC m=+146.745834611" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.573491 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:25 crc kubenswrapper[4768]: E1124 16:54:25.573670 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:26.07364524 +0000 UTC m=+147.320613898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.573825 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:25 crc kubenswrapper[4768]: E1124 16:54:25.574394 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:26.074380973 +0000 UTC m=+147.321349631 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.576182 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-lcnvd" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.578101 4768 patch_prober.go:28] interesting pod/router-default-5444994796-lcnvd container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.578143 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lcnvd" podUID="666315a2-e8c4-42db-849b-d4c9e0d437c1" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.584415 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kzr98" podStartSLOduration=123.584398501 podStartE2EDuration="2m3.584398501s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:25.582029068 +0000 UTC m=+146.828997726" watchObservedRunningTime="2025-11-24 16:54:25.584398501 +0000 UTC m=+146.831367159" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.675445 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:25 crc kubenswrapper[4768]: E1124 16:54:25.675640 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:26.175602842 +0000 UTC m=+147.422571500 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.675739 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:25 crc kubenswrapper[4768]: E1124 16:54:25.676071 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:26.176059386 +0000 UTC m=+147.423028044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.777294 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:25 crc kubenswrapper[4768]: E1124 16:54:25.777455 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:26.277433281 +0000 UTC m=+147.524401939 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.777770 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:25 crc kubenswrapper[4768]: E1124 16:54:25.778197 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:26.278188554 +0000 UTC m=+147.525157212 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.878987 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:25 crc kubenswrapper[4768]: E1124 16:54:25.879165 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:26.379139375 +0000 UTC m=+147.626108033 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.879441 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:25 crc kubenswrapper[4768]: E1124 16:54:25.879760 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:26.379747994 +0000 UTC m=+147.626716652 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.964673 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9pdk7" event={"ID":"8b8cf1e2-836e-4240-93b8-1cb47a164953","Type":"ContainerStarted","Data":"10e9c229dc7942d3fd4b31051301f43149fd671f535e17dcfc7de0ba95d08675"} Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.966223 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-trtbq" event={"ID":"22c436de-d338-440d-bbf3-35a09799cffd","Type":"ContainerStarted","Data":"0c4f5df2a21ab0e9a7c7d8e70619a518ff797cc48db062358506a83f4f81f5ad"} Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.968027 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2db9l" event={"ID":"20950f1f-be32-40c9-84e8-abb6c2650d69","Type":"ContainerStarted","Data":"6307d41aea9cf97ef12c13fd19092f3e29b033b9a2b97b077713fce9b25dfe10"} Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.968756 4768 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-sznn8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.968799 4768 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-4w7fn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body= Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.968811 4768 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sznn8" podUID="3fe2ee62-cd6a-42be-b839-4c677251a006" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.968837 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn" podUID="165a085b-f1df-4875-ad2e-d9fb56db9f48" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.968799 4768 patch_prober.go:28] interesting pod/downloads-7954f5f757-88z72 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.968875 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-88z72" podUID="0178dda3-3c96-409e-8dee-789ecec9a47f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.969085 4768 patch_prober.go:28] interesting pod/console-operator-58897d9998-57xr4 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.969102 4768 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4dgcz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" start-of-body= Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.969113 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-57xr4" podUID="29755561-9db9-416d-b847-182fdb322ca5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.969129 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" podUID="86a58543-2a12-4886-93ce-8d25432a2166" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.24:6443/healthz\": dial tcp 10.217.0.24:6443: connect: connection refused" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.969616 4768 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-grlvv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.969648 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-grlvv" podUID="27e1bc8e-3020-4916-ae7e-6d07fe111973" containerName="catalog-operator" probeResult="failure" output="Get 
\"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.969687 4768 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rhk4d container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.969714 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" podUID="6393ad56-dadc-453f-b4f6-b7a6b52304e1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.970775 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-grlvv" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.980327 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:25 crc kubenswrapper[4768]: E1124 16:54:25.980517 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:26.480483768 +0000 UTC m=+147.727452436 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.980619 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:25 crc kubenswrapper[4768]: E1124 16:54:25.980975 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:26.480963933 +0000 UTC m=+147.727932791 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.985418 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wl6bz" podStartSLOduration=124.985399389 podStartE2EDuration="2m4.985399389s" podCreationTimestamp="2025-11-24 16:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:25.982614434 +0000 UTC m=+147.229583102" watchObservedRunningTime="2025-11-24 16:54:25.985399389 +0000 UTC m=+147.232368057" Nov 24 16:54:25 crc kubenswrapper[4768]: I1124 16:54:25.999152 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-rnhf2" podStartSLOduration=123.999134881 podStartE2EDuration="2m3.999134881s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:25.997975125 +0000 UTC m=+147.244943783" watchObservedRunningTime="2025-11-24 16:54:25.999134881 +0000 UTC m=+147.246103539" Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.011221 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vm425" podStartSLOduration=124.011190091 podStartE2EDuration="2m4.011190091s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:26.010319815 +0000 UTC m=+147.257288473" watchObservedRunningTime="2025-11-24 16:54:26.011190091 +0000 UTC m=+147.258158749" Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.081554 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:26 crc kubenswrapper[4768]: E1124 16:54:26.082086 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:26.581967556 +0000 UTC m=+147.828936394 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.091963 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:26 crc kubenswrapper[4768]: E1124 16:54:26.094083 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:26.594037976 +0000 UTC m=+147.841006834 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.196498 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:26 crc kubenswrapper[4768]: E1124 16:54:26.196941 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:26.696921737 +0000 UTC m=+147.943890385 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.298444 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:26 crc kubenswrapper[4768]: E1124 16:54:26.299008 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:26.798989832 +0000 UTC m=+148.045958490 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.400141 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:26 crc kubenswrapper[4768]: E1124 16:54:26.400340 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:26.900311595 +0000 UTC m=+148.147280253 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.400588 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:26 crc kubenswrapper[4768]: E1124 16:54:26.401117 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:26.901095819 +0000 UTC m=+148.148064517 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.501935 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:26 crc kubenswrapper[4768]: E1124 16:54:26.502098 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:27.002078161 +0000 UTC m=+148.249046819 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.502417 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:26 crc kubenswrapper[4768]: E1124 16:54:26.502749 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:27.002740751 +0000 UTC m=+148.249709409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.578093 4768 patch_prober.go:28] interesting pod/router-default-5444994796-lcnvd container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.578170 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lcnvd" podUID="666315a2-e8c4-42db-849b-d4c9e0d437c1" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.604087 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:26 crc kubenswrapper[4768]: E1124 16:54:26.604283 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:27.104255 +0000 UTC m=+148.351223658 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.604641 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:26 crc kubenswrapper[4768]: E1124 16:54:26.605058 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:27.105018043 +0000 UTC m=+148.351986711 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.637799 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.637872 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.639633 4768 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-jz5sv container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.20:8443/livez\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.639737 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv" podUID="0418ca12-7159-4da5-8b9c-3a408822a00e" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.20:8443/livez\": dial tcp 10.217.0.20:8443: connect: connection refused" Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.705679 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:26 crc kubenswrapper[4768]: E1124 16:54:26.705859 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 16:54:27.20583225 +0000 UTC m=+148.452800908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.706287 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:26 crc kubenswrapper[4768]: E1124 16:54:26.706624 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:27.206616704 +0000 UTC m=+148.453585362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.808368 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:26 crc kubenswrapper[4768]: E1124 16:54:26.808549 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:27.308516784 +0000 UTC m=+148.555485442 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.809359 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:26 crc kubenswrapper[4768]: E1124 16:54:26.809769 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:27.309749412 +0000 UTC m=+148.556718070 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.911249 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:26 crc kubenswrapper[4768]: E1124 16:54:26.911752 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:27.411719585 +0000 UTC m=+148.658688243 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.976811 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vdhkx" event={"ID":"43f897f2-d364-4b38-9345-5660dcf6e704","Type":"ContainerStarted","Data":"712f5de51d8ea94f73731c263d7f5beb9f95ee4a295be4c54e47b3b72a9b8cfb"} Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.979955 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" event={"ID":"0675c0cb-77d3-43c1-a7ba-ff51c9307f21","Type":"ContainerStarted","Data":"1ed413a878027490aed116306ac537dee7590b47e9b2f9f80b67f264942644b9"} Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.980022 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" event={"ID":"0675c0cb-77d3-43c1-a7ba-ff51c9307f21","Type":"ContainerStarted","Data":"0b8753356adf8755201a7facef8140caac8cf0e4c65fe996f84fee8d09996b5a"} Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.982479 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fvztb" event={"ID":"47d55901-e472-477e-9a26-fea65fce74a5","Type":"ContainerStarted","Data":"56884b48b6f69b5d87f9fe69ec1e46b8f78ed3e556f227d5555f10a280dd977d"} Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.983113 4768 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-sznn8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.983156 4768 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-grlvv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.983172 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sznn8" podUID="3fe2ee62-cd6a-42be-b839-4c677251a006" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.983171 4768 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-4w7fn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body= Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.983256 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-trtbq" Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.983245 4768 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-grlvv" podUID="27e1bc8e-3020-4916-ae7e-6d07fe111973" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.983288 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn" podUID="165a085b-f1df-4875-ad2e-d9fb56db9f48" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.983654 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2db9l" Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.988260 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.988328 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.989668 4768 patch_prober.go:28] interesting pod/apiserver-76f77b778f-gq6hn container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.989725 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" podUID="0675c0cb-77d3-43c1-a7ba-ff51c9307f21" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" Nov 24 16:54:26 crc kubenswrapper[4768]: I1124 16:54:26.993301 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-26hqh" podStartSLOduration=124.9932831 podStartE2EDuration="2m4.9932831s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:26.032818486 +0000 UTC m=+147.279787144" watchObservedRunningTime="2025-11-24 16:54:26.9932831 +0000 UTC m=+148.240251748" Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.012594 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:27 crc kubenswrapper[4768]: E1124 16:54:27.013057 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:27.513036657 +0000 UTC m=+148.760005495 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.024739 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-vdhkx" podStartSLOduration=125.024718246 podStartE2EDuration="2m5.024718246s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:26.996626193 +0000 UTC m=+148.243594851" watchObservedRunningTime="2025-11-24 16:54:27.024718246 +0000 UTC m=+148.271686904" Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.027059 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-trtbq" podStartSLOduration=8.027042537 podStartE2EDuration="8.027042537s" podCreationTimestamp="2025-11-24 16:54:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:27.024044945 +0000 UTC m=+148.271013603" watchObservedRunningTime="2025-11-24 16:54:27.027042537 +0000 UTC m=+148.274011195" Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.041703 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9pdk7" podStartSLOduration=125.041680197 podStartE2EDuration="2m5.041680197s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:27.040996886 +0000 UTC m=+148.287965544" watchObservedRunningTime="2025-11-24 16:54:27.041680197 +0000 UTC m=+148.288648865" Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.080296 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2db9l" podStartSLOduration=125.080280423 podStartE2EDuration="2m5.080280423s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:27.077104645 +0000 UTC m=+148.324073303" watchObservedRunningTime="2025-11-24 16:54:27.080280423 +0000 UTC m=+148.327249081" Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.101667 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fvztb" podStartSLOduration=126.101643689 podStartE2EDuration="2m6.101643689s" podCreationTimestamp="2025-11-24 16:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:27.101479914 +0000 UTC m=+148.348448572" watchObservedRunningTime="2025-11-24 16:54:27.101643689 +0000 UTC m=+148.348612347" Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.114024 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:27 crc kubenswrapper[4768]: E1124 16:54:27.114441 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:27.614416441 +0000 UTC m=+148.861385099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.114918 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:27 crc kubenswrapper[4768]: E1124 16:54:27.115559 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:27.615541316 +0000 UTC m=+148.862509964 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.142562 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" podStartSLOduration=126.142517085 podStartE2EDuration="2m6.142517085s" podCreationTimestamp="2025-11-24 16:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:27.137561892 +0000 UTC m=+148.384530550" watchObservedRunningTime="2025-11-24 16:54:27.142517085 +0000 UTC m=+148.389485763" Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.218736 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:27 crc kubenswrapper[4768]: E1124 16:54:27.218937 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:27.718904041 +0000 UTC m=+148.965872699 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.219807 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:27 crc kubenswrapper[4768]: E1124 16:54:27.220173 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:27.72016496 +0000 UTC m=+148.967133618 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.320786 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:27 crc kubenswrapper[4768]: E1124 16:54:27.320992 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:27.820961226 +0000 UTC m=+149.067929884 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.321308 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:27 crc kubenswrapper[4768]: E1124 16:54:27.321663 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:27.821655738 +0000 UTC m=+149.068624396 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.422055 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:27 crc kubenswrapper[4768]: E1124 16:54:27.422564 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:27.922539186 +0000 UTC m=+149.169507844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.524618 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:27 crc kubenswrapper[4768]: E1124 16:54:27.526599 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.026566321 +0000 UTC m=+149.273535159 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.580260 4768 patch_prober.go:28] interesting pod/router-default-5444994796-lcnvd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 16:54:27 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 16:54:27 crc kubenswrapper[4768]: [+]process-running ok Nov 24 16:54:27 crc kubenswrapper[4768]: healthz check failed Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.580749 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lcnvd" podUID="666315a2-e8c4-42db-849b-d4c9e0d437c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.625614 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:27 crc kubenswrapper[4768]: E1124 16:54:27.625863 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.12581789 +0000 UTC m=+149.372786548 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.626329 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:27 crc kubenswrapper[4768]: E1124 16:54:27.626657 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.126642455 +0000 UTC m=+149.373611113 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.727960 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:27 crc kubenswrapper[4768]: E1124 16:54:27.728119 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.228086672 +0000 UTC m=+149.475055330 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.728361 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:27 crc kubenswrapper[4768]: E1124 16:54:27.728687 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.22867453 +0000 UTC m=+149.475643188 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.811650 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fvztb" Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.829668 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:27 crc kubenswrapper[4768]: E1124 16:54:27.829897 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.329871178 +0000 UTC m=+149.576839836 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.830025 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:27 crc kubenswrapper[4768]: E1124 16:54:27.831000 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.330987273 +0000 UTC m=+149.577955931 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.931830 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:27 crc kubenswrapper[4768]: E1124 16:54:27.932004 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.431970955 +0000 UTC m=+149.678939623 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.932177 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:27 crc kubenswrapper[4768]: E1124 16:54:27.932543 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.432531792 +0000 UTC m=+149.679500450 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:27 crc kubenswrapper[4768]: I1124 16:54:27.991220 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" event={"ID":"c893e46b-93d8-4545-a905-f2b0cf62a746","Type":"ContainerStarted","Data":"f877eea4e950e06c295413f7499f7147e4a2bd3f4a777955ed250687323a0f18"} Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.033323 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:28 crc kubenswrapper[4768]: E1124 16:54:28.033529 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.533495783 +0000 UTC m=+149.780464451 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.033925 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:28 crc kubenswrapper[4768]: E1124 16:54:28.034286 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.534278778 +0000 UTC m=+149.781247436 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.135166 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:28 crc kubenswrapper[4768]: E1124 16:54:28.135371 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.635334042 +0000 UTC m=+149.882302700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.135832 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:28 crc kubenswrapper[4768]: E1124 16:54:28.136873 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.636863659 +0000 UTC m=+149.883832317 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.237374 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:28 crc kubenswrapper[4768]: E1124 16:54:28.237572 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.737544282 +0000 UTC m=+149.984512940 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.238002 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:28 crc kubenswrapper[4768]: E1124 16:54:28.238330 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.738320835 +0000 UTC m=+149.985289483 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.339157 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:28 crc kubenswrapper[4768]: E1124 16:54:28.339415 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.8393781 +0000 UTC m=+150.086346798 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.339679 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:28 crc kubenswrapper[4768]: E1124 16:54:28.340093 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.840075471 +0000 UTC m=+150.087044129 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.441535 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.441983 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.442050 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:54:28 crc kubenswrapper[4768]: E1124 16:54:28.442622 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:28.94258859 +0000 UTC m=+150.189557248 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.447575 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.448199 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.543243 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.543323 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.543387 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:54:28 crc kubenswrapper[4768]: E1124 16:54:28.545884 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:29.045860343 +0000 UTC m=+150.292829001 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.560869 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.563064 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.580302 4768 patch_prober.go:28] interesting pod/router-default-5444994796-lcnvd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 16:54:28 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 16:54:28 crc kubenswrapper[4768]: [+]process-running ok Nov 24 16:54:28 crc kubenswrapper[4768]: healthz check failed Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.580384 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lcnvd" podUID="666315a2-e8c4-42db-849b-d4c9e0d437c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.604059 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.623676 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.629668 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.645401 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:28 crc kubenswrapper[4768]: E1124 16:54:28.645700 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:29.145684779 +0000 UTC m=+150.392653437 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.746681 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:28 crc kubenswrapper[4768]: E1124 16:54:28.747012 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:29.246998101 +0000 UTC m=+150.493966759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.847710 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:28 crc kubenswrapper[4768]: E1124 16:54:28.848007 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:29.347966623 +0000 UTC m=+150.594935281 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:28 crc kubenswrapper[4768]: I1124 16:54:28.951196 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:28 crc kubenswrapper[4768]: E1124 16:54:28.951722 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:29.45170642 +0000 UTC m=+150.698675068 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.058298 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:29 crc kubenswrapper[4768]: E1124 16:54:29.058899 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:29.558858471 +0000 UTC m=+150.805827129 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.096408 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" event={"ID":"c893e46b-93d8-4545-a905-f2b0cf62a746","Type":"ContainerStarted","Data":"7ae7da3fe2b36e194de18975f534cfad6ad9dc74295062caeffab314df33242c"} Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.096453 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" event={"ID":"c893e46b-93d8-4545-a905-f2b0cf62a746","Type":"ContainerStarted","Data":"a0d8d3182852576ea57e6fe8a342812874888f995ce7352ee6f995d9482d21ef"} Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.160121 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:29 crc kubenswrapper[4768]: E1124 16:54:29.161507 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:29.661494574 +0000 UTC m=+150.908463222 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.178303 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-fvztb" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.265077 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:29 crc kubenswrapper[4768]: E1124 16:54:29.265517 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:29.765492749 +0000 UTC m=+151.012461407 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.366776 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:29 crc kubenswrapper[4768]: E1124 16:54:29.367149 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:29.867137581 +0000 UTC m=+151.114106239 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.449689 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cw5r9"] Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.450701 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cw5r9" Nov 24 16:54:29 crc kubenswrapper[4768]: W1124 16:54:29.463662 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-542ee2ec5c384dce31657965cdfc4e006a59462b7fe30bb7611fd57187211810 WatchSource:0}: Error finding container 542ee2ec5c384dce31657965cdfc4e006a59462b7fe30bb7611fd57187211810: Status 404 returned error can't find the container with id 542ee2ec5c384dce31657965cdfc4e006a59462b7fe30bb7611fd57187211810 Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.468643 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:29 crc kubenswrapper[4768]: E1124 16:54:29.469064 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:29.969043502 +0000 UTC m=+151.216012160 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.512696 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 16:54:29 crc kubenswrapper[4768]: W1124 16:54:29.517560 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-1de72429aceebe64feed51e9f971304dc0e06deeff90fc9af4fdcf4d27783bbb WatchSource:0}: Error finding container 1de72429aceebe64feed51e9f971304dc0e06deeff90fc9af4fdcf4d27783bbb: Status 404 returned error can't find the container with id 1de72429aceebe64feed51e9f971304dc0e06deeff90fc9af4fdcf4d27783bbb Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.530194 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cw5r9"] Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.578266 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.578315 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/291b46fc-d3a5-457b-a85a-306f37d45ecc-utilities\") pod \"community-operators-cw5r9\" (UID: \"291b46fc-d3a5-457b-a85a-306f37d45ecc\") " pod="openshift-marketplace/community-operators-cw5r9" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.578374 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzbpm\" (UniqueName: \"kubernetes.io/projected/291b46fc-d3a5-457b-a85a-306f37d45ecc-kube-api-access-fzbpm\") pod \"community-operators-cw5r9\" (UID: \"291b46fc-d3a5-457b-a85a-306f37d45ecc\") " pod="openshift-marketplace/community-operators-cw5r9" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.578437 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/291b46fc-d3a5-457b-a85a-306f37d45ecc-catalog-content\") pod \"community-operators-cw5r9\" (UID: \"291b46fc-d3a5-457b-a85a-306f37d45ecc\") " pod="openshift-marketplace/community-operators-cw5r9" Nov 24 16:54:29 crc kubenswrapper[4768]: E1124 16:54:29.578792 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:30.078779303 +0000 UTC m=+151.325747961 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.586104 4768 patch_prober.go:28] interesting pod/router-default-5444994796-lcnvd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 16:54:29 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 16:54:29 crc kubenswrapper[4768]: [+]process-running ok Nov 24 16:54:29 crc kubenswrapper[4768]: healthz check failed Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.586449 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lcnvd" podUID="666315a2-e8c4-42db-849b-d4c9e0d437c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.679743 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-llvqz"] Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.681020 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-llvqz"] Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.681110 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-llvqz" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.682115 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.682309 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/291b46fc-d3a5-457b-a85a-306f37d45ecc-utilities\") pod \"community-operators-cw5r9\" (UID: \"291b46fc-d3a5-457b-a85a-306f37d45ecc\") " pod="openshift-marketplace/community-operators-cw5r9" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.682575 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzbpm\" (UniqueName: \"kubernetes.io/projected/291b46fc-d3a5-457b-a85a-306f37d45ecc-kube-api-access-fzbpm\") pod \"community-operators-cw5r9\" (UID: \"291b46fc-d3a5-457b-a85a-306f37d45ecc\") " pod="openshift-marketplace/community-operators-cw5r9" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.682633 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/291b46fc-d3a5-457b-a85a-306f37d45ecc-catalog-content\") pod \"community-operators-cw5r9\" (UID: \"291b46fc-d3a5-457b-a85a-306f37d45ecc\") " pod="openshift-marketplace/community-operators-cw5r9" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.682658 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.683034 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/291b46fc-d3a5-457b-a85a-306f37d45ecc-catalog-content\") pod \"community-operators-cw5r9\" (UID: \"291b46fc-d3a5-457b-a85a-306f37d45ecc\") " pod="openshift-marketplace/community-operators-cw5r9" Nov 24 16:54:29 crc kubenswrapper[4768]: E1124 16:54:29.683112 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:30.183095427 +0000 UTC m=+151.430064075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.683329 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/291b46fc-d3a5-457b-a85a-306f37d45ecc-utilities\") pod \"community-operators-cw5r9\" (UID: \"291b46fc-d3a5-457b-a85a-306f37d45ecc\") " pod="openshift-marketplace/community-operators-cw5r9" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.687708 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.688877 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.690847 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.691044 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.710408 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.744036 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzbpm\" (UniqueName: \"kubernetes.io/projected/291b46fc-d3a5-457b-a85a-306f37d45ecc-kube-api-access-fzbpm\") pod \"community-operators-cw5r9\" (UID: \"291b46fc-d3a5-457b-a85a-306f37d45ecc\") " pod="openshift-marketplace/community-operators-cw5r9" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.775651 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m9k7h"] Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.776611 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m9k7h" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.782607 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cw5r9"
Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.783512 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/069473c1-4cad-470f-a20e-2352a5bd6ff4-utilities\") pod \"certified-operators-llvqz\" (UID: \"069473c1-4cad-470f-a20e-2352a5bd6ff4\") " pod="openshift-marketplace/certified-operators-llvqz" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.783550 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55bsx\" (UniqueName: \"kubernetes.io/projected/069473c1-4cad-470f-a20e-2352a5bd6ff4-kube-api-access-55bsx\") pod \"certified-operators-llvqz\" (UID: \"069473c1-4cad-470f-a20e-2352a5bd6ff4\") " pod="openshift-marketplace/certified-operators-llvqz" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.783749 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/395b04f1-7a44-4f3a-bcde-42bfc7a50e43-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"395b04f1-7a44-4f3a-bcde-42bfc7a50e43\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.783809 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.783831 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/395b04f1-7a44-4f3a-bcde-42bfc7a50e43-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"395b04f1-7a44-4f3a-bcde-42bfc7a50e43\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.783855 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/069473c1-4cad-470f-a20e-2352a5bd6ff4-catalog-content\") pod \"certified-operators-llvqz\" (UID: \"069473c1-4cad-470f-a20e-2352a5bd6ff4\") " pod="openshift-marketplace/certified-operators-llvqz" Nov 24 16:54:29 crc kubenswrapper[4768]: E1124 16:54:29.784108 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:30.28409606 +0000 UTC m=+151.531064708 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.815115 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m9k7h"] Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.885929 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.886138 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55bsx\" (UniqueName: \"kubernetes.io/projected/069473c1-4cad-470f-a20e-2352a5bd6ff4-kube-api-access-55bsx\") pod \"certified-operators-llvqz\" (UID: \"069473c1-4cad-470f-a20e-2352a5bd6ff4\") " pod="openshift-marketplace/certified-operators-llvqz" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.886170 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt7j5\" (UniqueName: \"kubernetes.io/projected/bd7fb843-d66e-46c2-9eed-e8525f79b7ed-kube-api-access-nt7j5\") pod \"community-operators-m9k7h\" (UID: \"bd7fb843-d66e-46c2-9eed-e8525f79b7ed\") " pod="openshift-marketplace/community-operators-m9k7h" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.886209 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/395b04f1-7a44-4f3a-bcde-42bfc7a50e43-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"395b04f1-7a44-4f3a-bcde-42bfc7a50e43\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.886252 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7fb843-d66e-46c2-9eed-e8525f79b7ed-catalog-content\") pod \"community-operators-m9k7h\" (UID: \"bd7fb843-d66e-46c2-9eed-e8525f79b7ed\") " pod="openshift-marketplace/community-operators-m9k7h" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.886287 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/395b04f1-7a44-4f3a-bcde-42bfc7a50e43-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"395b04f1-7a44-4f3a-bcde-42bfc7a50e43\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.886315 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/069473c1-4cad-470f-a20e-2352a5bd6ff4-catalog-content\") pod \"certified-operators-llvqz\" (UID: \"069473c1-4cad-470f-a20e-2352a5bd6ff4\") " pod="openshift-marketplace/certified-operators-llvqz" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.886357 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7fb843-d66e-46c2-9eed-e8525f79b7ed-utilities\") pod \"community-operators-m9k7h\" (UID: \"bd7fb843-d66e-46c2-9eed-e8525f79b7ed\") " pod="openshift-marketplace/community-operators-m9k7h"
Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.886375 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/069473c1-4cad-470f-a20e-2352a5bd6ff4-utilities\") pod \"certified-operators-llvqz\" (UID: \"069473c1-4cad-470f-a20e-2352a5bd6ff4\") " pod="openshift-marketplace/certified-operators-llvqz" Nov 24 16:54:29 crc kubenswrapper[4768]: E1124 16:54:29.886622 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:30.386600449 +0000 UTC m=+151.633569107 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.886780 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/069473c1-4cad-470f-a20e-2352a5bd6ff4-utilities\") pod \"certified-operators-llvqz\" (UID: \"069473c1-4cad-470f-a20e-2352a5bd6ff4\") " pod="openshift-marketplace/certified-operators-llvqz" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.886857 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/395b04f1-7a44-4f3a-bcde-42bfc7a50e43-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"395b04f1-7a44-4f3a-bcde-42bfc7a50e43\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.887115 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/069473c1-4cad-470f-a20e-2352a5bd6ff4-catalog-content\") pod \"certified-operators-llvqz\" (UID: \"069473c1-4cad-470f-a20e-2352a5bd6ff4\") " pod="openshift-marketplace/certified-operators-llvqz" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.905939 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/395b04f1-7a44-4f3a-bcde-42bfc7a50e43-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"395b04f1-7a44-4f3a-bcde-42bfc7a50e43\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.920055 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55bsx\" (UniqueName: \"kubernetes.io/projected/069473c1-4cad-470f-a20e-2352a5bd6ff4-kube-api-access-55bsx\") pod \"certified-operators-llvqz\" (UID: \"069473c1-4cad-470f-a20e-2352a5bd6ff4\") " pod="openshift-marketplace/certified-operators-llvqz" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.971595 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-skz5c"]
Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.972569 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-skz5c" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.987989 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-skz5c"] Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.988197 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7fb843-d66e-46c2-9eed-e8525f79b7ed-catalog-content\") pod \"community-operators-m9k7h\" (UID: \"bd7fb843-d66e-46c2-9eed-e8525f79b7ed\") " pod="openshift-marketplace/community-operators-m9k7h" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.988256 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.988314 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7fb843-d66e-46c2-9eed-e8525f79b7ed-utilities\") pod \"community-operators-m9k7h\" (UID: \"bd7fb843-d66e-46c2-9eed-e8525f79b7ed\") " pod="openshift-marketplace/community-operators-m9k7h" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.988367 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt7j5\" (UniqueName: \"kubernetes.io/projected/bd7fb843-d66e-46c2-9eed-e8525f79b7ed-kube-api-access-nt7j5\") pod \"community-operators-m9k7h\" (UID: \"bd7fb843-d66e-46c2-9eed-e8525f79b7ed\") " pod="openshift-marketplace/community-operators-m9k7h" Nov 24 16:54:29 crc kubenswrapper[4768]: E1124 16:54:29.988696 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:30.488680855 +0000 UTC m=+151.735649513 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.988949 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7fb843-d66e-46c2-9eed-e8525f79b7ed-utilities\") pod \"community-operators-m9k7h\" (UID: \"bd7fb843-d66e-46c2-9eed-e8525f79b7ed\") " pod="openshift-marketplace/community-operators-m9k7h" Nov 24 16:54:29 crc kubenswrapper[4768]: I1124 16:54:29.991386 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7fb843-d66e-46c2-9eed-e8525f79b7ed-catalog-content\") pod \"community-operators-m9k7h\" (UID: \"bd7fb843-d66e-46c2-9eed-e8525f79b7ed\") " pod="openshift-marketplace/community-operators-m9k7h" Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.041182 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt7j5\" (UniqueName: \"kubernetes.io/projected/bd7fb843-d66e-46c2-9eed-e8525f79b7ed-kube-api-access-nt7j5\") pod \"community-operators-m9k7h\" (UID: \"bd7fb843-d66e-46c2-9eed-e8525f79b7ed\") " pod="openshift-marketplace/community-operators-m9k7h" Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.043204 4768 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.089547 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.089762 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zq75\" (UniqueName: \"kubernetes.io/projected/416b2c6c-bf32-4f82-98d6-75abc55f3118-kube-api-access-9zq75\") pod \"certified-operators-skz5c\" (UID: \"416b2c6c-bf32-4f82-98d6-75abc55f3118\") " pod="openshift-marketplace/certified-operators-skz5c" Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.089786 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/416b2c6c-bf32-4f82-98d6-75abc55f3118-catalog-content\") pod \"certified-operators-skz5c\" (UID: \"416b2c6c-bf32-4f82-98d6-75abc55f3118\") " pod="openshift-marketplace/certified-operators-skz5c" Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.089863 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/416b2c6c-bf32-4f82-98d6-75abc55f3118-utilities\") pod \"certified-operators-skz5c\" (UID: \"416b2c6c-bf32-4f82-98d6-75abc55f3118\") " pod="openshift-marketplace/certified-operators-skz5c" Nov 24 16:54:30 crc kubenswrapper[4768]: E1124 16:54:30.089972 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:30.589956646 +0000 UTC m=+151.836925304 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.135680 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-llvqz" Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.145774 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8f2e2feb430d2cfd640231257b1dc8f4c4f7fcf34c1076d51bbe40e80d62dd2a"} Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.145828 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e9170dcdc49c99df3c8a55a4ffb2983c8601bdb99ca61f01a1aed2a26299038a"} Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.149481 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.158048 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e2a17beb6c6b2bafd17814f1c031c8c7cbcf2bd2864104b4207664426b925dd1"} Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.158108 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1de72429aceebe64feed51e9f971304dc0e06deeff90fc9af4fdcf4d27783bbb"} Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.158693 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.165050 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" event={"ID":"c893e46b-93d8-4545-a905-f2b0cf62a746","Type":"ContainerStarted","Data":"6254698070c595c6fdbb059be5184f6f3a0ef73e955ef3aa616b68bd0507428f"} Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.172539 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m9k7h"
Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.181822 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0bc8402bcffac07d96ea535a6295700c554806d6adec19ed142fb161d41d6e81"} Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.181880 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"542ee2ec5c384dce31657965cdfc4e006a59462b7fe30bb7611fd57187211810"} Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.193143 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.193194 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/416b2c6c-bf32-4f82-98d6-75abc55f3118-utilities\") pod \"certified-operators-skz5c\" (UID: \"416b2c6c-bf32-4f82-98d6-75abc55f3118\") " pod="openshift-marketplace/certified-operators-skz5c" Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.193254 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zq75\" (UniqueName: \"kubernetes.io/projected/416b2c6c-bf32-4f82-98d6-75abc55f3118-kube-api-access-9zq75\") pod \"certified-operators-skz5c\" (UID: \"416b2c6c-bf32-4f82-98d6-75abc55f3118\") " pod="openshift-marketplace/certified-operators-skz5c" Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.193272 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/416b2c6c-bf32-4f82-98d6-75abc55f3118-catalog-content\") pod \"certified-operators-skz5c\" (UID: \"416b2c6c-bf32-4f82-98d6-75abc55f3118\") " pod="openshift-marketplace/certified-operators-skz5c" Nov 24 16:54:30 crc kubenswrapper[4768]: E1124 16:54:30.194865 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:30.694849818 +0000 UTC m=+151.941818476 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.195000 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/416b2c6c-bf32-4f82-98d6-75abc55f3118-catalog-content\") pod \"certified-operators-skz5c\" (UID: \"416b2c6c-bf32-4f82-98d6-75abc55f3118\") " pod="openshift-marketplace/certified-operators-skz5c" Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.195219 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/416b2c6c-bf32-4f82-98d6-75abc55f3118-utilities\") pod \"certified-operators-skz5c\" (UID: \"416b2c6c-bf32-4f82-98d6-75abc55f3118\") " pod="openshift-marketplace/certified-operators-skz5c" Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.245555 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zq75\" (UniqueName: \"kubernetes.io/projected/416b2c6c-bf32-4f82-98d6-75abc55f3118-kube-api-access-9zq75\") pod \"certified-operators-skz5c\" (UID: \"416b2c6c-bf32-4f82-98d6-75abc55f3118\") " pod="openshift-marketplace/certified-operators-skz5c" Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.297603 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:30 crc kubenswrapper[4768]: E1124 16:54:30.299051 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:30.799029978 +0000 UTC m=+152.045998636 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.319060 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-skz5c" Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.401515 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:30 crc kubenswrapper[4768]: E1124 16:54:30.401862 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:30.901849087 +0000 UTC m=+152.148817735 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.490573 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-dfw5p" podStartSLOduration=11.490549021 podStartE2EDuration="11.490549021s" podCreationTimestamp="2025-11-24 16:54:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:30.260864306 +0000 UTC m=+151.507832964" watchObservedRunningTime="2025-11-24 16:54:30.490549021 +0000 UTC m=+151.737517679" Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.492980 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.502396 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:30 crc kubenswrapper[4768]: E1124 16:54:30.502808 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:31.002789507 +0000 UTC m=+152.249758165 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.507763 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cw5r9"] Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.580613 4768 patch_prober.go:28] interesting pod/router-default-5444994796-lcnvd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 16:54:30 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 16:54:30 crc kubenswrapper[4768]: [+]process-running ok Nov 24 16:54:30 crc kubenswrapper[4768]: healthz check failed Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.581028 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lcnvd" podUID="666315a2-e8c4-42db-849b-d4c9e0d437c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.612708 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:30 crc kubenswrapper[4768]: E1124 16:54:30.613049 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 16:54:31.113036344 +0000 UTC m=+152.360004992 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mgmbb" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.721008 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 16:54:30 crc kubenswrapper[4768]: E1124 16:54:30.721378 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 16:54:31.221361312 +0000 UTC m=+152.468329970 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.732666 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-llvqz"] Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.743800 4768 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-24T16:54:30.04321636Z","Handler":null,"Name":""} Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.794581 4768 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.794622 4768 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.822667 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.835576 4768 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.835620 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb"
Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.861865 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m9k7h"]
Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.935942 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-skz5c"]
Nov 24 16:54:30 crc kubenswrapper[4768]: W1124 16:54:30.956579 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod416b2c6c_bf32_4f82_98d6_75abc55f3118.slice/crio-0e37e03592c0979ea92d5428251fda9a170eb3697b9c9c5667433aebbbcecc84 WatchSource:0}: Error finding container 0e37e03592c0979ea92d5428251fda9a170eb3697b9c9c5667433aebbbcecc84: Status 404 returned error can't find the container with id 0e37e03592c0979ea92d5428251fda9a170eb3697b9c9c5667433aebbbcecc84
Nov 24 16:54:30 crc kubenswrapper[4768]: I1124 16:54:30.963181 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mgmbb\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.026081 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.077937 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.149561 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.188949 4768 generic.go:334] "Generic (PLEG): container finished" podID="34d6a2c2-3620-4dd5-a7fd-a160030b3c7d" containerID="769e2692ea50cf6b0edcb7b7e7c91ed8a8c3484a19c12451b191f27cf6e7fb35" exitCode=0
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.189030 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8" event={"ID":"34d6a2c2-3620-4dd5-a7fd-a160030b3c7d","Type":"ContainerDied","Data":"769e2692ea50cf6b0edcb7b7e7c91ed8a8c3484a19c12451b191f27cf6e7fb35"}
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.190860 4768 generic.go:334] "Generic (PLEG): container finished" podID="069473c1-4cad-470f-a20e-2352a5bd6ff4" containerID="e7c4ba1e14e759004d61c12b6dae8d05ad9a3eef160a68966fa8a3491448d29a" exitCode=0
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.190936 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llvqz" event={"ID":"069473c1-4cad-470f-a20e-2352a5bd6ff4","Type":"ContainerDied","Data":"e7c4ba1e14e759004d61c12b6dae8d05ad9a3eef160a68966fa8a3491448d29a"}
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.190970 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llvqz" event={"ID":"069473c1-4cad-470f-a20e-2352a5bd6ff4","Type":"ContainerStarted","Data":"ba7ed9a29d70098c18b6d8465d1b21a76ddbcc857a11d5d144b7b599af163b4c"}
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.194086 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.198314 4768 generic.go:334] "Generic (PLEG): container finished" podID="291b46fc-d3a5-457b-a85a-306f37d45ecc" containerID="450dbdb2c515d7519559c41cbbeca7e1c82eec5f4002876133297b3a7e016ccc" exitCode=0
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.198475 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cw5r9" event={"ID":"291b46fc-d3a5-457b-a85a-306f37d45ecc","Type":"ContainerDied","Data":"450dbdb2c515d7519559c41cbbeca7e1c82eec5f4002876133297b3a7e016ccc"}
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.198517 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cw5r9" event={"ID":"291b46fc-d3a5-457b-a85a-306f37d45ecc","Type":"ContainerStarted","Data":"4ce5fb87caf96bfad9a7d7d4bc00498490651429d06115783369ae79393c831b"}
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.199995 4768 generic.go:334] "Generic (PLEG): container finished" podID="416b2c6c-bf32-4f82-98d6-75abc55f3118" containerID="8a244221b0a953abfbb37c515e34627c70bc7871754331b0f0fe403c214c1050" exitCode=0
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.200060 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skz5c" event={"ID":"416b2c6c-bf32-4f82-98d6-75abc55f3118","Type":"ContainerDied","Data":"8a244221b0a953abfbb37c515e34627c70bc7871754331b0f0fe403c214c1050"}
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.200095 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skz5c" event={"ID":"416b2c6c-bf32-4f82-98d6-75abc55f3118","Type":"ContainerStarted","Data":"0e37e03592c0979ea92d5428251fda9a170eb3697b9c9c5667433aebbbcecc84"}
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.208043 4768 generic.go:334] "Generic (PLEG): container finished" podID="bd7fb843-d66e-46c2-9eed-e8525f79b7ed" containerID="980bbb656bc36ce2edb99c00240895909441209bafdafce9245e39fe449ef1a0" exitCode=0
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.208129 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9k7h" event={"ID":"bd7fb843-d66e-46c2-9eed-e8525f79b7ed","Type":"ContainerDied","Data":"980bbb656bc36ce2edb99c00240895909441209bafdafce9245e39fe449ef1a0"}
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.208165 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9k7h" event={"ID":"bd7fb843-d66e-46c2-9eed-e8525f79b7ed","Type":"ContainerStarted","Data":"50d76ad47bcdeb2a641200aa7ce91bfc20671b7297b767a1bc9813fc838affcf"}
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.224539 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"395b04f1-7a44-4f3a-bcde-42bfc7a50e43","Type":"ContainerStarted","Data":"23b5d6fd988e9c73557f52dc1e65a161abae91d9ed664da806a35ccb87823598"}
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.225820 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"395b04f1-7a44-4f3a-bcde-42bfc7a50e43","Type":"ContainerStarted","Data":"eb807acb93898bd023a4cb1ffc9df8a5233b990508c8960bb1b4a62d165b68e6"}
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.327235 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.331606 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.33139468 podStartE2EDuration="2.33139468s" podCreationTimestamp="2025-11-24 16:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:31.331329838 +0000 UTC m=+152.578298496" watchObservedRunningTime="2025-11-24 16:54:31.33139468 +0000 UTC m=+152.578363338"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.374761 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tgh5z"]
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.376097 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tgh5z"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.378556 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.389685 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgh5z"]
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.447986 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2069e94-bdf1-4d31-9294-e19c0393e478-utilities\") pod \"redhat-marketplace-tgh5z\" (UID: \"b2069e94-bdf1-4d31-9294-e19c0393e478\") " pod="openshift-marketplace/redhat-marketplace-tgh5z"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.448039 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2069e94-bdf1-4d31-9294-e19c0393e478-catalog-content\") pod \"redhat-marketplace-tgh5z\" (UID: \"b2069e94-bdf1-4d31-9294-e19c0393e478\") " pod="openshift-marketplace/redhat-marketplace-tgh5z"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.448168 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6khs6\" (UniqueName: \"kubernetes.io/projected/b2069e94-bdf1-4d31-9294-e19c0393e478-kube-api-access-6khs6\") pod \"redhat-marketplace-tgh5z\" (UID: \"b2069e94-bdf1-4d31-9294-e19c0393e478\") " pod="openshift-marketplace/redhat-marketplace-tgh5z"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.454162 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mgmbb"]
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.549133 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2069e94-bdf1-4d31-9294-e19c0393e478-utilities\") pod \"redhat-marketplace-tgh5z\" (UID: \"b2069e94-bdf1-4d31-9294-e19c0393e478\") " pod="openshift-marketplace/redhat-marketplace-tgh5z"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.549202 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2069e94-bdf1-4d31-9294-e19c0393e478-catalog-content\") pod \"redhat-marketplace-tgh5z\" (UID: \"b2069e94-bdf1-4d31-9294-e19c0393e478\") " pod="openshift-marketplace/redhat-marketplace-tgh5z"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.549321 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6khs6\" (UniqueName: \"kubernetes.io/projected/b2069e94-bdf1-4d31-9294-e19c0393e478-kube-api-access-6khs6\") pod \"redhat-marketplace-tgh5z\" (UID: \"b2069e94-bdf1-4d31-9294-e19c0393e478\") " pod="openshift-marketplace/redhat-marketplace-tgh5z"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.549886 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2069e94-bdf1-4d31-9294-e19c0393e478-utilities\") pod \"redhat-marketplace-tgh5z\" (UID: \"b2069e94-bdf1-4d31-9294-e19c0393e478\") " pod="openshift-marketplace/redhat-marketplace-tgh5z"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.549947 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2069e94-bdf1-4d31-9294-e19c0393e478-catalog-content\") pod \"redhat-marketplace-tgh5z\" (UID: \"b2069e94-bdf1-4d31-9294-e19c0393e478\") " pod="openshift-marketplace/redhat-marketplace-tgh5z"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.570384 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6khs6\" (UniqueName: \"kubernetes.io/projected/b2069e94-bdf1-4d31-9294-e19c0393e478-kube-api-access-6khs6\") pod \"redhat-marketplace-tgh5z\" (UID: \"b2069e94-bdf1-4d31-9294-e19c0393e478\") " pod="openshift-marketplace/redhat-marketplace-tgh5z"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.580794 4768 patch_prober.go:28] interesting pod/router-default-5444994796-lcnvd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 16:54:31 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld
Nov 24 16:54:31 crc kubenswrapper[4768]: [+]process-running ok
Nov 24 16:54:31 crc kubenswrapper[4768]: healthz check failed
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.581179 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lcnvd" podUID="666315a2-e8c4-42db-849b-d4c9e0d437c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.591415 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.592045 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.594846 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.644215 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.657836 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jz5sv"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.668196 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-57xr4"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.695441 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tgh5z"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.779147 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kzc9b"]
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.780641 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzc9b"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.804175 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzc9b"]
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.856209 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5s5m\" (UniqueName: \"kubernetes.io/projected/82e36f11-c00f-4548-b15d-a13a98dae032-kube-api-access-z5s5m\") pod \"redhat-marketplace-kzc9b\" (UID: \"82e36f11-c00f-4548-b15d-a13a98dae032\") " pod="openshift-marketplace/redhat-marketplace-kzc9b"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.856275 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82e36f11-c00f-4548-b15d-a13a98dae032-catalog-content\") pod \"redhat-marketplace-kzc9b\" (UID: \"82e36f11-c00f-4548-b15d-a13a98dae032\") " pod="openshift-marketplace/redhat-marketplace-kzc9b"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.856298 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82e36f11-c00f-4548-b15d-a13a98dae032-utilities\") pod \"redhat-marketplace-kzc9b\" (UID: \"82e36f11-c00f-4548-b15d-a13a98dae032\") " pod="openshift-marketplace/redhat-marketplace-kzc9b"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.864431 4768 patch_prober.go:28] interesting pod/downloads-7954f5f757-88z72 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.864446 4768 patch_prober.go:28] interesting pod/downloads-7954f5f757-88z72 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.864492 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-88z72" podUID="0178dda3-3c96-409e-8dee-789ecec9a47f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.864506 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-88z72" podUID="0178dda3-3c96-409e-8dee-789ecec9a47f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.948439 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgh5z"]
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.958000 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5s5m\" (UniqueName: \"kubernetes.io/projected/82e36f11-c00f-4548-b15d-a13a98dae032-kube-api-access-z5s5m\") pod \"redhat-marketplace-kzc9b\" (UID: \"82e36f11-c00f-4548-b15d-a13a98dae032\") " pod="openshift-marketplace/redhat-marketplace-kzc9b"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.958058 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82e36f11-c00f-4548-b15d-a13a98dae032-catalog-content\") pod \"redhat-marketplace-kzc9b\" (UID: \"82e36f11-c00f-4548-b15d-a13a98dae032\") " pod="openshift-marketplace/redhat-marketplace-kzc9b"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.958086 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82e36f11-c00f-4548-b15d-a13a98dae032-utilities\") pod \"redhat-marketplace-kzc9b\" (UID: \"82e36f11-c00f-4548-b15d-a13a98dae032\") " pod="openshift-marketplace/redhat-marketplace-kzc9b"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.958593 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82e36f11-c00f-4548-b15d-a13a98dae032-utilities\") pod \"redhat-marketplace-kzc9b\" (UID: \"82e36f11-c00f-4548-b15d-a13a98dae032\") " pod="openshift-marketplace/redhat-marketplace-kzc9b"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.958746 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82e36f11-c00f-4548-b15d-a13a98dae032-catalog-content\") pod \"redhat-marketplace-kzc9b\" (UID: \"82e36f11-c00f-4548-b15d-a13a98dae032\") " pod="openshift-marketplace/redhat-marketplace-kzc9b"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.978866 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5s5m\" (UniqueName: \"kubernetes.io/projected/82e36f11-c00f-4548-b15d-a13a98dae032-kube-api-access-z5s5m\") pod \"redhat-marketplace-kzc9b\" (UID: \"82e36f11-c00f-4548-b15d-a13a98dae032\") " pod="openshift-marketplace/redhat-marketplace-kzc9b"
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.995154 4768 patch_prober.go:28] interesting pod/apiserver-76f77b778f-gq6hn container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Nov 24 16:54:31 crc kubenswrapper[4768]: [+]log ok
Nov 24 16:54:31 crc kubenswrapper[4768]: [+]etcd ok
Nov 24 16:54:31 crc kubenswrapper[4768]: [+]poststarthook/start-apiserver-admission-initializer ok
Nov 24 16:54:31 crc kubenswrapper[4768]: [+]poststarthook/generic-apiserver-start-informers ok
Nov 24 16:54:31 crc kubenswrapper[4768]: [+]poststarthook/max-in-flight-filter ok
Nov 24 16:54:31 crc kubenswrapper[4768]: [+]poststarthook/storage-object-count-tracker-hook ok
Nov 24 16:54:31 crc kubenswrapper[4768]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Nov 24 16:54:31 crc kubenswrapper[4768]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Nov 24 16:54:31 crc kubenswrapper[4768]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Nov 24 16:54:31 crc kubenswrapper[4768]: [+]poststarthook/project.openshift.io-projectcache ok
Nov 24 16:54:31 crc kubenswrapper[4768]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Nov 24 16:54:31 crc kubenswrapper[4768]: [+]poststarthook/openshift.io-startinformers ok
Nov 24 16:54:31 crc kubenswrapper[4768]: [+]poststarthook/openshift.io-restmapperupdater ok
Nov 24 16:54:31 crc kubenswrapper[4768]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Nov 24 16:54:31 crc kubenswrapper[4768]: livez check failed
Nov 24 16:54:31 crc kubenswrapper[4768]: I1124 16:54:31.995226 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" podUID="0675c0cb-77d3-43c1-a7ba-ff51c9307f21" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.039038 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.117079 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzc9b"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.156174 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-bkp5p"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.156217 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-bkp5p"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.158970 4768 patch_prober.go:28] interesting pod/console-f9d7485db-bkp5p container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.159041 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-bkp5p" podUID="afbb3133-a1d9-48c9-a496-83babf4eb0c6" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.232309 4768 generic.go:334] "Generic (PLEG): container finished" podID="395b04f1-7a44-4f3a-bcde-42bfc7a50e43" containerID="23b5d6fd988e9c73557f52dc1e65a161abae91d9ed664da806a35ccb87823598" exitCode=0
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.232400 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"395b04f1-7a44-4f3a-bcde-42bfc7a50e43","Type":"ContainerDied","Data":"23b5d6fd988e9c73557f52dc1e65a161abae91d9ed664da806a35ccb87823598"}
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.244301 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgh5z" event={"ID":"b2069e94-bdf1-4d31-9294-e19c0393e478","Type":"ContainerStarted","Data":"50ceec3c95a09825bec9edd60974fe2c7c87b5c357cdbc9fe8311c82a29b9e61"}
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.244689 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgh5z" event={"ID":"b2069e94-bdf1-4d31-9294-e19c0393e478","Type":"ContainerStarted","Data":"01839dedb38bf58e3060a0b21f375a4be7388f461b9710d0605003935a2693dc"}
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.253387 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" event={"ID":"38d3cf53-6a1c-4009-9b0a-0638aae38656","Type":"ContainerStarted","Data":"9a3c92150f0a02a172deff3b9bf9821fb05d24a601c8bb39569128e7909d3c49"}
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.253469 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" event={"ID":"38d3cf53-6a1c-4009-9b0a-0638aae38656","Type":"ContainerStarted","Data":"8f8522fe62687af201b90503264ff900f3f0681a25686a7a7abb4e5d32c0c9ae"}
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.253888 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.275646 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.276461 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.283166 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.283389 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.283852 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" podStartSLOduration=130.283822928 podStartE2EDuration="2m10.283822928s" podCreationTimestamp="2025-11-24 16:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:32.281964551 +0000 UTC m=+153.528933209" watchObservedRunningTime="2025-11-24 16:54:32.283822928 +0000 UTC m=+153.530791586"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.291856 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.374908 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7fb6ca0-4b81-490e-9463-87a297babdda-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e7fb6ca0-4b81-490e-9463-87a297babdda\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.375679 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7fb6ca0-4b81-490e-9463-87a297babdda-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e7fb6ca0-4b81-490e-9463-87a297babdda\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.476567 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7fb6ca0-4b81-490e-9463-87a297babdda-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e7fb6ca0-4b81-490e-9463-87a297babdda\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.476646 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7fb6ca0-4b81-490e-9463-87a297babdda-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e7fb6ca0-4b81-490e-9463-87a297babdda\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.477001 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7fb6ca0-4b81-490e-9463-87a297babdda-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e7fb6ca0-4b81-490e-9463-87a297babdda\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.502262 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7fb6ca0-4b81-490e-9463-87a297babdda-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e7fb6ca0-4b81-490e-9463-87a297babdda\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.504809 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzc9b"]
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.537225 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.576050 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-lcnvd"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.577088 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34d6a2c2-3620-4dd5-a7fd-a160030b3c7d-config-volume\") pod \"34d6a2c2-3620-4dd5-a7fd-a160030b3c7d\" (UID: \"34d6a2c2-3620-4dd5-a7fd-a160030b3c7d\") "
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.577147 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jwsj\" (UniqueName: \"kubernetes.io/projected/34d6a2c2-3620-4dd5-a7fd-a160030b3c7d-kube-api-access-9jwsj\") pod \"34d6a2c2-3620-4dd5-a7fd-a160030b3c7d\" (UID: \"34d6a2c2-3620-4dd5-a7fd-a160030b3c7d\") "
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.577253 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34d6a2c2-3620-4dd5-a7fd-a160030b3c7d-secret-volume\") pod \"34d6a2c2-3620-4dd5-a7fd-a160030b3c7d\" (UID: \"34d6a2c2-3620-4dd5-a7fd-a160030b3c7d\") "
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.579485 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34d6a2c2-3620-4dd5-a7fd-a160030b3c7d-config-volume" (OuterVolumeSpecName: "config-volume") pod "34d6a2c2-3620-4dd5-a7fd-a160030b3c7d" (UID: "34d6a2c2-3620-4dd5-a7fd-a160030b3c7d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.583889 4768 patch_prober.go:28] interesting pod/router-default-5444994796-lcnvd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 16:54:32 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld
Nov 24 16:54:32 crc kubenswrapper[4768]: [+]process-running ok
Nov 24 16:54:32 crc kubenswrapper[4768]: healthz check failed
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.583931 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lcnvd" podUID="666315a2-e8c4-42db-849b-d4c9e0d437c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.584167 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34d6a2c2-3620-4dd5-a7fd-a160030b3c7d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "34d6a2c2-3620-4dd5-a7fd-a160030b3c7d" (UID: "34d6a2c2-3620-4dd5-a7fd-a160030b3c7d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.588936 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34d6a2c2-3620-4dd5-a7fd-a160030b3c7d-kube-api-access-9jwsj" (OuterVolumeSpecName: "kube-api-access-9jwsj") pod "34d6a2c2-3620-4dd5-a7fd-a160030b3c7d" (UID: "34d6a2c2-3620-4dd5-a7fd-a160030b3c7d"). InnerVolumeSpecName "kube-api-access-9jwsj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.600011 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-grlvv"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.611338 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sznn8"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.624951 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4w7fn"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.626000 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.688664 4768 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34d6a2c2-3620-4dd5-a7fd-a160030b3c7d-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.688994 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34d6a2c2-3620-4dd5-a7fd-a160030b3c7d-config-volume\") on node \"crc\" DevicePath \"\""
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.689020 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jwsj\" (UniqueName: \"kubernetes.io/projected/34d6a2c2-3620-4dd5-a7fd-a160030b3c7d-kube-api-access-9jwsj\") on node \"crc\" DevicePath \"\""
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.807332 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8vkl6"]
Nov 24 16:54:32 crc kubenswrapper[4768]: E1124 16:54:32.807621 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d6a2c2-3620-4dd5-a7fd-a160030b3c7d" containerName="collect-profiles"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.807636 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d6a2c2-3620-4dd5-a7fd-a160030b3c7d" containerName="collect-profiles"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.807761 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="34d6a2c2-3620-4dd5-a7fd-a160030b3c7d" containerName="collect-profiles"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.808694 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8vkl6"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.818779 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.829164 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8vkl6"]
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.892297 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff262f7e-1ff3-47a7-8346-3e91a6d3583d-utilities\") pod \"redhat-operators-8vkl6\" (UID: \"ff262f7e-1ff3-47a7-8346-3e91a6d3583d\") " pod="openshift-marketplace/redhat-operators-8vkl6"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.892372 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff262f7e-1ff3-47a7-8346-3e91a6d3583d-catalog-content\") pod \"redhat-operators-8vkl6\" (UID: \"ff262f7e-1ff3-47a7-8346-3e91a6d3583d\") " pod="openshift-marketplace/redhat-operators-8vkl6"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.892398 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9htb\" (UniqueName: \"kubernetes.io/projected/ff262f7e-1ff3-47a7-8346-3e91a6d3583d-kube-api-access-t9htb\") pod \"redhat-operators-8vkl6\" (UID: \"ff262f7e-1ff3-47a7-8346-3e91a6d3583d\") " pod="openshift-marketplace/redhat-operators-8vkl6"
Nov 24 16:54:32 crc kubenswrapper[4768]: I1124 16:54:32.931772 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d"
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.001103 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff262f7e-1ff3-47a7-8346-3e91a6d3583d-utilities\") pod \"redhat-operators-8vkl6\" (UID: \"ff262f7e-1ff3-47a7-8346-3e91a6d3583d\") " pod="openshift-marketplace/redhat-operators-8vkl6"
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.001564 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff262f7e-1ff3-47a7-8346-3e91a6d3583d-catalog-content\") pod \"redhat-operators-8vkl6\" (UID: \"ff262f7e-1ff3-47a7-8346-3e91a6d3583d\") " pod="openshift-marketplace/redhat-operators-8vkl6"
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.001589 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9htb\" (UniqueName: \"kubernetes.io/projected/ff262f7e-1ff3-47a7-8346-3e91a6d3583d-kube-api-access-t9htb\") pod \"redhat-operators-8vkl6\" (UID: \"ff262f7e-1ff3-47a7-8346-3e91a6d3583d\") " pod="openshift-marketplace/redhat-operators-8vkl6"
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.003376 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff262f7e-1ff3-47a7-8346-3e91a6d3583d-utilities\") pod \"redhat-operators-8vkl6\" (UID: \"ff262f7e-1ff3-47a7-8346-3e91a6d3583d\") " pod="openshift-marketplace/redhat-operators-8vkl6"
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.003500 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff262f7e-1ff3-47a7-8346-3e91a6d3583d-catalog-content\") pod \"redhat-operators-8vkl6\" (UID: \"ff262f7e-1ff3-47a7-8346-3e91a6d3583d\") " pod="openshift-marketplace/redhat-operators-8vkl6"
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.036276 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9htb\" (UniqueName: \"kubernetes.io/projected/ff262f7e-1ff3-47a7-8346-3e91a6d3583d-kube-api-access-t9htb\") pod \"redhat-operators-8vkl6\" (UID: \"ff262f7e-1ff3-47a7-8346-3e91a6d3583d\") " pod="openshift-marketplace/redhat-operators-8vkl6"
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.112566 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 24 16:54:33 crc kubenswrapper[4768]: W1124 16:54:33.147756 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode7fb6ca0_4b81_490e_9463_87a297babdda.slice/crio-02c01feebb8ed09466f9e70193273fdb50183c3ed8d378e838db99e77f68a9c6 WatchSource:0}: Error finding container 02c01feebb8ed09466f9e70193273fdb50183c3ed8d378e838db99e77f68a9c6: Status 404 returned error can't find the container with id 02c01feebb8ed09466f9e70193273fdb50183c3ed8d378e838db99e77f68a9c6
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.159922 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8vkl6"
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.168232 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vddcd"]
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.169154 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vddcd"
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.211887 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vddcd"]
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.281554 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8"
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.283553 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8" event={"ID":"34d6a2c2-3620-4dd5-a7fd-a160030b3c7d","Type":"ContainerDied","Data":"0e98e13b7888c2b8f0afa9dd98b037e9baeeeee8b519c96e9c154eb8245cb87a"}
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.283642 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e98e13b7888c2b8f0afa9dd98b037e9baeeeee8b519c96e9c154eb8245cb87a"
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.290885 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e7fb6ca0-4b81-490e-9463-87a297babdda","Type":"ContainerStarted","Data":"02c01feebb8ed09466f9e70193273fdb50183c3ed8d378e838db99e77f68a9c6"}
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.304453 4768 generic.go:334] "Generic (PLEG): container finished" podID="b2069e94-bdf1-4d31-9294-e19c0393e478" containerID="50ceec3c95a09825bec9edd60974fe2c7c87b5c357cdbc9fe8311c82a29b9e61" exitCode=0
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.304515 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgh5z" event={"ID":"b2069e94-bdf1-4d31-9294-e19c0393e478","Type":"ContainerDied","Data":"50ceec3c95a09825bec9edd60974fe2c7c87b5c357cdbc9fe8311c82a29b9e61"}
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.311519 4768 generic.go:334] "Generic (PLEG): container finished" podID="82e36f11-c00f-4548-b15d-a13a98dae032" containerID="8ad761d9123b09bd6acfc8585b89eb0ca20143a7b4705f993093074b87212d0d" exitCode=0
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.312177 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzc9b" event={"ID":"82e36f11-c00f-4548-b15d-a13a98dae032","Type":"ContainerDied","Data":"8ad761d9123b09bd6acfc8585b89eb0ca20143a7b4705f993093074b87212d0d"}
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.312200 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzc9b" event={"ID":"82e36f11-c00f-4548-b15d-a13a98dae032","Type":"ContainerStarted","Data":"93ac5974c9951680955616d6e6c4a0ef4e54624e9fad920d00a458c18d60dbbf"}
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.313410 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d479d0-7e74-4a1a-8da3-7a42b959b0a5-utilities\") pod \"redhat-operators-vddcd\" (UID: \"b0d479d0-7e74-4a1a-8da3-7a42b959b0a5\") " pod="openshift-marketplace/redhat-operators-vddcd"
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.313490 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d479d0-7e74-4a1a-8da3-7a42b959b0a5-catalog-content\") pod \"redhat-operators-vddcd\" (UID: \"b0d479d0-7e74-4a1a-8da3-7a42b959b0a5\") " pod="openshift-marketplace/redhat-operators-vddcd"
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.313533 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp5vg\" (UniqueName: \"kubernetes.io/projected/b0d479d0-7e74-4a1a-8da3-7a42b959b0a5-kube-api-access-bp5vg\") pod \"redhat-operators-vddcd\" (UID: \"b0d479d0-7e74-4a1a-8da3-7a42b959b0a5\") " pod="openshift-marketplace/redhat-operators-vddcd"
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.414744 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp5vg\" (UniqueName: \"kubernetes.io/projected/b0d479d0-7e74-4a1a-8da3-7a42b959b0a5-kube-api-access-bp5vg\") pod \"redhat-operators-vddcd\" (UID: \"b0d479d0-7e74-4a1a-8da3-7a42b959b0a5\") " pod="openshift-marketplace/redhat-operators-vddcd"
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.415065 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d479d0-7e74-4a1a-8da3-7a42b959b0a5-utilities\") pod \"redhat-operators-vddcd\" (UID: \"b0d479d0-7e74-4a1a-8da3-7a42b959b0a5\") " pod="openshift-marketplace/redhat-operators-vddcd"
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.415191 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d479d0-7e74-4a1a-8da3-7a42b959b0a5-catalog-content\") pod \"redhat-operators-vddcd\" (UID: \"b0d479d0-7e74-4a1a-8da3-7a42b959b0a5\") " pod="openshift-marketplace/redhat-operators-vddcd"
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.415617 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d479d0-7e74-4a1a-8da3-7a42b959b0a5-catalog-content\") pod \"redhat-operators-vddcd\" (UID: \"b0d479d0-7e74-4a1a-8da3-7a42b959b0a5\") " pod="openshift-marketplace/redhat-operators-vddcd"
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.416569 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d479d0-7e74-4a1a-8da3-7a42b959b0a5-utilities\") pod \"redhat-operators-vddcd\" (UID: \"b0d479d0-7e74-4a1a-8da3-7a42b959b0a5\") " pod="openshift-marketplace/redhat-operators-vddcd"
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.436930 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp5vg\" (UniqueName: \"kubernetes.io/projected/b0d479d0-7e74-4a1a-8da3-7a42b959b0a5-kube-api-access-bp5vg\") pod \"redhat-operators-vddcd\" (UID: \"b0d479d0-7e74-4a1a-8da3-7a42b959b0a5\") " pod="openshift-marketplace/redhat-operators-vddcd"
Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.568647 4768 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-vddcd" Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.584784 4768 patch_prober.go:28] interesting pod/router-default-5444994796-lcnvd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 16:54:33 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 16:54:33 crc kubenswrapper[4768]: [+]process-running ok Nov 24 16:54:33 crc kubenswrapper[4768]: healthz check failed Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.584910 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lcnvd" podUID="666315a2-e8c4-42db-849b-d4c9e0d437c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.708014 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.721301 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8vkl6"] Nov 24 16:54:33 crc kubenswrapper[4768]: W1124 16:54:33.745077 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff262f7e_1ff3_47a7_8346_3e91a6d3583d.slice/crio-1ff88e20ffaf18bf848b4c12636dac8aabcf0821ba479e6672e67bf8ecb0740d WatchSource:0}: Error finding container 1ff88e20ffaf18bf848b4c12636dac8aabcf0821ba479e6672e67bf8ecb0740d: Status 404 returned error can't find the container with id 1ff88e20ffaf18bf848b4c12636dac8aabcf0821ba479e6672e67bf8ecb0740d Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.820882 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/395b04f1-7a44-4f3a-bcde-42bfc7a50e43-kubelet-dir\") pod \"395b04f1-7a44-4f3a-bcde-42bfc7a50e43\" (UID: \"395b04f1-7a44-4f3a-bcde-42bfc7a50e43\") " Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.820969 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/395b04f1-7a44-4f3a-bcde-42bfc7a50e43-kube-api-access\") pod \"395b04f1-7a44-4f3a-bcde-42bfc7a50e43\" (UID: \"395b04f1-7a44-4f3a-bcde-42bfc7a50e43\") " Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.821639 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/395b04f1-7a44-4f3a-bcde-42bfc7a50e43-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "395b04f1-7a44-4f3a-bcde-42bfc7a50e43" (UID: "395b04f1-7a44-4f3a-bcde-42bfc7a50e43"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.827911 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/395b04f1-7a44-4f3a-bcde-42bfc7a50e43-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "395b04f1-7a44-4f3a-bcde-42bfc7a50e43" (UID: "395b04f1-7a44-4f3a-bcde-42bfc7a50e43"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.920206 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vddcd"] Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.922998 4768 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/395b04f1-7a44-4f3a-bcde-42bfc7a50e43-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 16:54:33 crc kubenswrapper[4768]: I1124 16:54:33.923019 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/395b04f1-7a44-4f3a-bcde-42bfc7a50e43-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 16:54:33 crc kubenswrapper[4768]: W1124 16:54:33.930436 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0d479d0_7e74_4a1a_8da3_7a42b959b0a5.slice/crio-3c641f0029f0e3115c1cc7fea91afc8efec966442ccc1d01ed1a1216c10969d3 WatchSource:0}: Error finding container 3c641f0029f0e3115c1cc7fea91afc8efec966442ccc1d01ed1a1216c10969d3: Status 404 returned error can't find the container with id 3c641f0029f0e3115c1cc7fea91afc8efec966442ccc1d01ed1a1216c10969d3 Nov 24 16:54:34 crc kubenswrapper[4768]: I1124 16:54:34.323608 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vddcd" event={"ID":"b0d479d0-7e74-4a1a-8da3-7a42b959b0a5","Type":"ContainerStarted","Data":"3c641f0029f0e3115c1cc7fea91afc8efec966442ccc1d01ed1a1216c10969d3"} Nov 24 16:54:34 crc kubenswrapper[4768]: I1124 16:54:34.326891 4768 generic.go:334] "Generic (PLEG): container finished" podID="ff262f7e-1ff3-47a7-8346-3e91a6d3583d" containerID="de8a9240181b92fe68cbd33911314a26578dc256a83a3bead1dde78c12dffcd5" exitCode=0 Nov 24 16:54:34 crc kubenswrapper[4768]: I1124 16:54:34.326989 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vkl6" event={"ID":"ff262f7e-1ff3-47a7-8346-3e91a6d3583d","Type":"ContainerDied","Data":"de8a9240181b92fe68cbd33911314a26578dc256a83a3bead1dde78c12dffcd5"} Nov 24 16:54:34 crc kubenswrapper[4768]: I1124 16:54:34.327054 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vkl6" event={"ID":"ff262f7e-1ff3-47a7-8346-3e91a6d3583d","Type":"ContainerStarted","Data":"1ff88e20ffaf18bf848b4c12636dac8aabcf0821ba479e6672e67bf8ecb0740d"} Nov 24 16:54:34 crc kubenswrapper[4768]: I1124 16:54:34.331639 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"395b04f1-7a44-4f3a-bcde-42bfc7a50e43","Type":"ContainerDied","Data":"eb807acb93898bd023a4cb1ffc9df8a5233b990508c8960bb1b4a62d165b68e6"} Nov 24 16:54:34 crc kubenswrapper[4768]: I1124 16:54:34.331670 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 16:54:34 crc kubenswrapper[4768]: I1124 16:54:34.331664 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb807acb93898bd023a4cb1ffc9df8a5233b990508c8960bb1b4a62d165b68e6" Nov 24 16:54:34 crc kubenswrapper[4768]: I1124 16:54:34.334098 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e7fb6ca0-4b81-490e-9463-87a297babdda","Type":"ContainerStarted","Data":"356fa9fd299339791ddd3ad55e1121abe71439054306dd1a0be6f96bafa5cfe5"} Nov 24 16:54:34 crc kubenswrapper[4768]: I1124 16:54:34.372452 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.372216881 podStartE2EDuration="2.372216881s" podCreationTimestamp="2025-11-24 16:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:54:34.355467486 +0000 UTC m=+155.602436164" watchObservedRunningTime="2025-11-24 16:54:34.372216881 +0000 UTC m=+155.619185559" Nov 24 16:54:34 crc kubenswrapper[4768]: I1124 16:54:34.581405 4768 patch_prober.go:28] interesting pod/router-default-5444994796-lcnvd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 16:54:34 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 16:54:34 crc kubenswrapper[4768]: [+]process-running ok Nov 24 16:54:34 crc kubenswrapper[4768]: healthz check failed Nov 24 16:54:34 crc kubenswrapper[4768]: I1124 16:54:34.581479 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lcnvd" podUID="666315a2-e8c4-42db-849b-d4c9e0d437c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 16:54:34 crc kubenswrapper[4768]: I1124 16:54:34.659566 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-trtbq" Nov 24 16:54:34 crc kubenswrapper[4768]: I1124 16:54:34.893513 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 16:54:34 crc kubenswrapper[4768]: I1124 16:54:34.893576 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 16:54:35 crc kubenswrapper[4768]: I1124 16:54:35.350261 4768 generic.go:334] "Generic (PLEG): container finished" podID="b0d479d0-7e74-4a1a-8da3-7a42b959b0a5" containerID="9e8b98b6fadd34a9feda22161cef12291bd6c5552a0bbd1315c0719524e9fbbc" exitCode=0 Nov 24 16:54:35 crc kubenswrapper[4768]: I1124 16:54:35.350418 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vddcd" event={"ID":"b0d479d0-7e74-4a1a-8da3-7a42b959b0a5","Type":"ContainerDied","Data":"9e8b98b6fadd34a9feda22161cef12291bd6c5552a0bbd1315c0719524e9fbbc"} Nov 24 16:54:35 crc 
kubenswrapper[4768]: I1124 16:54:35.359581 4768 generic.go:334] "Generic (PLEG): container finished" podID="e7fb6ca0-4b81-490e-9463-87a297babdda" containerID="356fa9fd299339791ddd3ad55e1121abe71439054306dd1a0be6f96bafa5cfe5" exitCode=0 Nov 24 16:54:35 crc kubenswrapper[4768]: I1124 16:54:35.359620 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e7fb6ca0-4b81-490e-9463-87a297babdda","Type":"ContainerDied","Data":"356fa9fd299339791ddd3ad55e1121abe71439054306dd1a0be6f96bafa5cfe5"} Nov 24 16:54:35 crc kubenswrapper[4768]: I1124 16:54:35.579857 4768 patch_prober.go:28] interesting pod/router-default-5444994796-lcnvd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 16:54:35 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 16:54:35 crc kubenswrapper[4768]: [+]process-running ok Nov 24 16:54:35 crc kubenswrapper[4768]: healthz check failed Nov 24 16:54:35 crc kubenswrapper[4768]: I1124 16:54:35.579932 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lcnvd" podUID="666315a2-e8c4-42db-849b-d4c9e0d437c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 16:54:36 crc kubenswrapper[4768]: I1124 16:54:36.580391 4768 patch_prober.go:28] interesting pod/router-default-5444994796-lcnvd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 16:54:36 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 16:54:36 crc kubenswrapper[4768]: [+]process-running ok Nov 24 16:54:36 crc kubenswrapper[4768]: healthz check failed Nov 24 16:54:36 crc kubenswrapper[4768]: I1124 16:54:36.580462 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lcnvd" podUID="666315a2-e8c4-42db-849b-d4c9e0d437c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 16:54:36 crc kubenswrapper[4768]: I1124 16:54:36.994752 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:37 crc kubenswrapper[4768]: I1124 16:54:36.999993 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-gq6hn" Nov 24 16:54:37 crc kubenswrapper[4768]: I1124 16:54:37.581742 4768 patch_prober.go:28] interesting pod/router-default-5444994796-lcnvd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 16:54:37 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 16:54:37 crc kubenswrapper[4768]: [+]process-running ok Nov 24 16:54:37 crc kubenswrapper[4768]: healthz check failed Nov 24 16:54:37 crc kubenswrapper[4768]: I1124 16:54:37.581806 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lcnvd" podUID="666315a2-e8c4-42db-849b-d4c9e0d437c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 16:54:38 crc kubenswrapper[4768]: I1124 16:54:38.579305 4768 patch_prober.go:28] interesting 
pod/router-default-5444994796-lcnvd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 16:54:38 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 16:54:38 crc kubenswrapper[4768]: [+]process-running ok Nov 24 16:54:38 crc kubenswrapper[4768]: healthz check failed Nov 24 16:54:38 crc kubenswrapper[4768]: I1124 16:54:38.579404 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lcnvd" podUID="666315a2-e8c4-42db-849b-d4c9e0d437c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 16:54:39 crc kubenswrapper[4768]: I1124 16:54:39.579147 4768 patch_prober.go:28] interesting pod/router-default-5444994796-lcnvd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 16:54:39 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 16:54:39 crc kubenswrapper[4768]: [+]process-running ok Nov 24 16:54:39 crc kubenswrapper[4768]: healthz check failed Nov 24 16:54:39 crc kubenswrapper[4768]: I1124 16:54:39.579593 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lcnvd" podUID="666315a2-e8c4-42db-849b-d4c9e0d437c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 16:54:40 crc kubenswrapper[4768]: I1124 16:54:40.579021 4768 patch_prober.go:28] interesting pod/router-default-5444994796-lcnvd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 16:54:40 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 16:54:40 crc kubenswrapper[4768]: [+]process-running ok Nov 24 16:54:40 crc kubenswrapper[4768]: healthz check failed Nov 24 16:54:40 crc kubenswrapper[4768]: I1124 16:54:40.579082 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lcnvd" podUID="666315a2-e8c4-42db-849b-d4c9e0d437c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 16:54:41 crc kubenswrapper[4768]: I1124 16:54:41.579156 4768 patch_prober.go:28] interesting pod/router-default-5444994796-lcnvd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 16:54:41 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 16:54:41 crc kubenswrapper[4768]: [+]process-running ok Nov 24 16:54:41 crc kubenswrapper[4768]: healthz check failed Nov 24 16:54:41 crc kubenswrapper[4768]: I1124 16:54:41.579243 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lcnvd" podUID="666315a2-e8c4-42db-849b-d4c9e0d437c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 16:54:41 crc kubenswrapper[4768]: I1124 16:54:41.865337 4768 patch_prober.go:28] interesting pod/downloads-7954f5f757-88z72 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Nov 
24 16:54:41 crc kubenswrapper[4768]: I1124 16:54:41.865393 4768 patch_prober.go:28] interesting pod/downloads-7954f5f757-88z72 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Nov 24 16:54:41 crc kubenswrapper[4768]: I1124 16:54:41.865404 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-88z72" podUID="0178dda3-3c96-409e-8dee-789ecec9a47f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Nov 24 16:54:41 crc kubenswrapper[4768]: I1124 16:54:41.865452 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-88z72" podUID="0178dda3-3c96-409e-8dee-789ecec9a47f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Nov 24 16:54:42 crc kubenswrapper[4768]: I1124 16:54:42.156282 4768 patch_prober.go:28] interesting pod/console-f9d7485db-bkp5p container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Nov 24 16:54:42 crc kubenswrapper[4768]: I1124 16:54:42.156337 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-bkp5p" podUID="afbb3133-a1d9-48c9-a496-83babf4eb0c6" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Nov 24 16:54:42 crc kubenswrapper[4768]: I1124 16:54:42.578490 4768 patch_prober.go:28] interesting pod/router-default-5444994796-lcnvd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 16:54:42 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 16:54:42 crc kubenswrapper[4768]: [+]process-running ok Nov 24 16:54:42 crc kubenswrapper[4768]: healthz check failed Nov 24 16:54:42 crc kubenswrapper[4768]: I1124 16:54:42.578615 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lcnvd" podUID="666315a2-e8c4-42db-849b-d4c9e0d437c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 16:54:43 crc kubenswrapper[4768]: I1124 16:54:43.579517 4768 patch_prober.go:28] interesting pod/router-default-5444994796-lcnvd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 16:54:43 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Nov 24 16:54:43 crc kubenswrapper[4768]: [+]process-running ok Nov 24 16:54:43 crc kubenswrapper[4768]: healthz check failed Nov 24 16:54:43 crc kubenswrapper[4768]: I1124 16:54:43.580087 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lcnvd" podUID="666315a2-e8c4-42db-849b-d4c9e0d437c1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 16:54:44 crc kubenswrapper[4768]: I1124 16:54:44.300528 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs\") pod \"network-metrics-daemon-275xl\" (UID: \"ff18637c-91e0-4ea4-9f9a-53c5b0277927\") " pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:54:44 crc kubenswrapper[4768]: I1124 16:54:44.309689 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ff18637c-91e0-4ea4-9f9a-53c5b0277927-metrics-certs\") pod \"network-metrics-daemon-275xl\" (UID: \"ff18637c-91e0-4ea4-9f9a-53c5b0277927\") " pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:54:44 crc kubenswrapper[4768]: I1124 16:54:44.404886 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-275xl" Nov 24 16:54:44 crc kubenswrapper[4768]: I1124 16:54:44.581561 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-lcnvd" Nov 24 16:54:44 crc kubenswrapper[4768]: I1124 16:54:44.584415 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-lcnvd" Nov 24 16:54:44 crc kubenswrapper[4768]: I1124 16:54:44.902160 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 16:54:45 crc kubenswrapper[4768]: I1124 16:54:45.123094 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7fb6ca0-4b81-490e-9463-87a297babdda-kube-api-access\") pod \"e7fb6ca0-4b81-490e-9463-87a297babdda\" (UID: \"e7fb6ca0-4b81-490e-9463-87a297babdda\") " Nov 24 16:54:45 crc kubenswrapper[4768]: I1124 16:54:45.124152 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7fb6ca0-4b81-490e-9463-87a297babdda-kubelet-dir\") pod \"e7fb6ca0-4b81-490e-9463-87a297babdda\" (UID: \"e7fb6ca0-4b81-490e-9463-87a297babdda\") " Nov 24 16:54:45 crc kubenswrapper[4768]: I1124 16:54:45.124701 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7fb6ca0-4b81-490e-9463-87a297babdda-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e7fb6ca0-4b81-490e-9463-87a297babdda" (UID: "e7fb6ca0-4b81-490e-9463-87a297babdda"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 16:54:45 crc kubenswrapper[4768]: I1124 16:54:45.126177 4768 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7fb6ca0-4b81-490e-9463-87a297babdda-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 16:54:45 crc kubenswrapper[4768]: I1124 16:54:45.132181 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7fb6ca0-4b81-490e-9463-87a297babdda-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7fb6ca0-4b81-490e-9463-87a297babdda" (UID: "e7fb6ca0-4b81-490e-9463-87a297babdda"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:54:45 crc kubenswrapper[4768]: I1124 16:54:45.228077 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7fb6ca0-4b81-490e-9463-87a297babdda-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 16:54:45 crc kubenswrapper[4768]: I1124 16:54:45.449559 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e7fb6ca0-4b81-490e-9463-87a297babdda","Type":"ContainerDied","Data":"02c01feebb8ed09466f9e70193273fdb50183c3ed8d378e838db99e77f68a9c6"} Nov 24 16:54:45 crc kubenswrapper[4768]: I1124 16:54:45.449652 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02c01feebb8ed09466f9e70193273fdb50183c3ed8d378e838db99e77f68a9c6" Nov 24 16:54:45 crc kubenswrapper[4768]: I1124 16:54:45.449787 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 16:54:51 crc kubenswrapper[4768]: I1124 16:54:51.157223 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 16:54:51 crc kubenswrapper[4768]: I1124 16:54:51.896697 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-88z72" Nov 24 16:54:52 crc kubenswrapper[4768]: I1124 16:54:52.161630 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:54:52 crc kubenswrapper[4768]: I1124 16:54:52.166488 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 16:55:01 crc kubenswrapper[4768]: E1124 16:55:01.839448 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 24 16:55:01 crc kubenswrapper[4768]: E1124 16:55:01.840740 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6khs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-tgh5z_openshift-marketplace(b2069e94-bdf1-4d31-9294-e19c0393e478): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 16:55:01 crc kubenswrapper[4768]: E1124 16:55:01.842055 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-tgh5z" podUID="b2069e94-bdf1-4d31-9294-e19c0393e478" Nov 24 16:55:02 crc kubenswrapper[4768]: I1124 16:55:02.612949 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2db9l" Nov 24 16:55:04 crc kubenswrapper[4768]: I1124 16:55:04.893105 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 16:55:04 crc kubenswrapper[4768]: I1124 16:55:04.893531 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 16:55:08 crc kubenswrapper[4768]: I1124 16:55:08.683534 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 16:55:17 crc kubenswrapper[4768]: E1124 16:55:17.285593 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 24 16:55:17 crc kubenswrapper[4768]: E1124 16:55:17.286214 4768 kuberuntime_manager.go:1274] "Unhandled Error" 
err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nt7j5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-m9k7h_openshift-marketplace(bd7fb843-d66e-46c2-9eed-e8525f79b7ed): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 16:55:17 crc kubenswrapper[4768]: E1124 16:55:17.287422 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-m9k7h" podUID="bd7fb843-d66e-46c2-9eed-e8525f79b7ed" Nov 24 16:55:17 crc kubenswrapper[4768]: E1124 16:55:17.820614 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 24 16:55:17 crc kubenswrapper[4768]: E1124 16:55:17.820798 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzbpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-cw5r9_openshift-marketplace(291b46fc-d3a5-457b-a85a-306f37d45ecc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 16:55:17 crc kubenswrapper[4768]: E1124 16:55:17.822050 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-cw5r9" podUID="291b46fc-d3a5-457b-a85a-306f37d45ecc" Nov 24 16:55:23 crc kubenswrapper[4768]: E1124 16:55:23.066295 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-m9k7h" podUID="bd7fb843-d66e-46c2-9eed-e8525f79b7ed" Nov 24 16:55:23 crc kubenswrapper[4768]: E1124 16:55:23.066425 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-cw5r9" podUID="291b46fc-d3a5-457b-a85a-306f37d45ecc" Nov 24 16:55:23 crc kubenswrapper[4768]: I1124 16:55:23.479674 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-275xl"] Nov 24 16:55:23 crc kubenswrapper[4768]: I1124 16:55:23.712967 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-275xl" event={"ID":"ff18637c-91e0-4ea4-9f9a-53c5b0277927","Type":"ContainerStarted","Data":"84705468892c0b6a5081b574fb84a8ca0cd8a7b717278ebbce4fbfaa2bb3b484"} Nov 24 16:55:23 crc kubenswrapper[4768]: E1124 16:55:23.940750 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 24 
16:55:23 crc kubenswrapper[4768]: E1124 16:55:23.941032 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t9htb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8vkl6_openshift-marketplace(ff262f7e-1ff3-47a7-8346-3e91a6d3583d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 16:55:23 crc kubenswrapper[4768]: E1124 16:55:23.942324 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-8vkl6" podUID="ff262f7e-1ff3-47a7-8346-3e91a6d3583d" Nov 24 16:55:24 crc kubenswrapper[4768]: I1124 16:55:24.721432 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llvqz" event={"ID":"069473c1-4cad-470f-a20e-2352a5bd6ff4","Type":"ContainerStarted","Data":"111d51c88f89c94352489f41fd73bc4429c8a4154c2bfbb6bd056ff1d44ad0b3"} Nov 24 16:55:24 crc kubenswrapper[4768]: I1124 16:55:24.724207 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skz5c" event={"ID":"416b2c6c-bf32-4f82-98d6-75abc55f3118","Type":"ContainerStarted","Data":"982e3ea5733c5a5183f340023b637e66a56f23ac602d8087d4e681b46be7e3a7"} Nov 24 16:55:24 crc kubenswrapper[4768]: I1124 16:55:24.726313 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-275xl" event={"ID":"ff18637c-91e0-4ea4-9f9a-53c5b0277927","Type":"ContainerStarted","Data":"994d7678042234f764ca85f441448b3ca1985f876b81817913290a6c938e7539"} Nov 24 16:55:24 crc kubenswrapper[4768]: E1124 16:55:24.732657 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8vkl6" podUID="ff262f7e-1ff3-47a7-8346-3e91a6d3583d" Nov 24 16:55:25 crc kubenswrapper[4768]: E1124 16:55:25.512776 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 24 16:55:25 crc kubenswrapper[4768]: E1124 16:55:25.513398 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5s5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-kzc9b_openshift-marketplace(82e36f11-c00f-4548-b15d-a13a98dae032): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 16:55:25 crc kubenswrapper[4768]: E1124 16:55:25.514869 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-kzc9b" podUID="82e36f11-c00f-4548-b15d-a13a98dae032" Nov 24 16:55:25 crc kubenswrapper[4768]: I1124 16:55:25.735560 4768 generic.go:334] "Generic (PLEG): container finished" podID="416b2c6c-bf32-4f82-98d6-75abc55f3118" containerID="982e3ea5733c5a5183f340023b637e66a56f23ac602d8087d4e681b46be7e3a7" exitCode=0 Nov 24 16:55:25 crc kubenswrapper[4768]: I1124 16:55:25.735618 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skz5c" event={"ID":"416b2c6c-bf32-4f82-98d6-75abc55f3118","Type":"ContainerDied","Data":"982e3ea5733c5a5183f340023b637e66a56f23ac602d8087d4e681b46be7e3a7"} Nov 24 16:55:25 crc kubenswrapper[4768]: I1124 16:55:25.739729 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-275xl" 
event={"ID":"ff18637c-91e0-4ea4-9f9a-53c5b0277927","Type":"ContainerStarted","Data":"69dd50de1823e894325acc8874f5e5a11bb987cc3a39ae7623259f210ca833ec"} Nov 24 16:55:25 crc kubenswrapper[4768]: I1124 16:55:25.742225 4768 generic.go:334] "Generic (PLEG): container finished" podID="069473c1-4cad-470f-a20e-2352a5bd6ff4" containerID="111d51c88f89c94352489f41fd73bc4429c8a4154c2bfbb6bd056ff1d44ad0b3" exitCode=0 Nov 24 16:55:25 crc kubenswrapper[4768]: I1124 16:55:25.742373 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llvqz" event={"ID":"069473c1-4cad-470f-a20e-2352a5bd6ff4","Type":"ContainerDied","Data":"111d51c88f89c94352489f41fd73bc4429c8a4154c2bfbb6bd056ff1d44ad0b3"} Nov 24 16:55:25 crc kubenswrapper[4768]: I1124 16:55:25.786753 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-275xl" podStartSLOduration=184.78670975 podStartE2EDuration="3m4.78670975s" podCreationTimestamp="2025-11-24 16:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:55:25.781629118 +0000 UTC m=+207.028597836" watchObservedRunningTime="2025-11-24 16:55:25.78670975 +0000 UTC m=+207.033678408" Nov 24 16:55:25 crc kubenswrapper[4768]: E1124 16:55:25.914548 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 24 16:55:25 crc kubenswrapper[4768]: E1124 16:55:25.914780 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bp5vg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-vddcd_openshift-marketplace(b0d479d0-7e74-4a1a-8da3-7a42b959b0a5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 16:55:25 crc kubenswrapper[4768]: 
E1124 16:55:25.916699 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vddcd" podUID="b0d479d0-7e74-4a1a-8da3-7a42b959b0a5" Nov 24 16:55:25 crc kubenswrapper[4768]: E1124 16:55:25.990465 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kzc9b" podUID="82e36f11-c00f-4548-b15d-a13a98dae032" Nov 24 16:55:26 crc kubenswrapper[4768]: I1124 16:55:26.750128 4768 generic.go:334] "Generic (PLEG): container finished" podID="b2069e94-bdf1-4d31-9294-e19c0393e478" containerID="27685d0747c7dd483022545f1ad8cb40d18a69229dc57dcac287b8852c7c3f88" exitCode=0 Nov 24 16:55:26 crc kubenswrapper[4768]: I1124 16:55:26.750254 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgh5z" event={"ID":"b2069e94-bdf1-4d31-9294-e19c0393e478","Type":"ContainerDied","Data":"27685d0747c7dd483022545f1ad8cb40d18a69229dc57dcac287b8852c7c3f88"} Nov 24 16:55:27 crc kubenswrapper[4768]: E1124 16:55:27.036770 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-vddcd" podUID="b0d479d0-7e74-4a1a-8da3-7a42b959b0a5" Nov 24 16:55:27 crc kubenswrapper[4768]: I1124 16:55:27.764647 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llvqz" event={"ID":"069473c1-4cad-470f-a20e-2352a5bd6ff4","Type":"ContainerStarted","Data":"e80e4c33eb34f53e342addc8e366c1d8e8f605c6a81bd4f657ddb7ad4d31b94f"} Nov 24 16:55:27 crc kubenswrapper[4768]: I1124 16:55:27.776486 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skz5c" event={"ID":"416b2c6c-bf32-4f82-98d6-75abc55f3118","Type":"ContainerStarted","Data":"4e6284fa861d425e17db7b6c7dd2ea1215800dca92816ef8f9e08bc1a4fa06dd"} Nov 24 16:55:27 crc kubenswrapper[4768]: I1124 16:55:27.793636 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-llvqz" podStartSLOduration=2.662755832 podStartE2EDuration="58.793609926s" podCreationTimestamp="2025-11-24 16:54:29 +0000 UTC" firstStartedPulling="2025-11-24 16:54:31.193798853 +0000 UTC m=+152.440767511" lastFinishedPulling="2025-11-24 16:55:27.324652947 +0000 UTC m=+208.571621605" observedRunningTime="2025-11-24 16:55:27.789535506 +0000 UTC m=+209.036504164" watchObservedRunningTime="2025-11-24 16:55:27.793609926 +0000 UTC m=+209.040578614" Nov 24 16:55:27 crc kubenswrapper[4768]: I1124 16:55:27.825640 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-skz5c" podStartSLOduration=3.421809497 podStartE2EDuration="58.825616026s" podCreationTimestamp="2025-11-24 16:54:29 +0000 UTC" firstStartedPulling="2025-11-24 16:54:31.205273296 +0000 UTC m=+152.452241954" lastFinishedPulling="2025-11-24 16:55:26.609079825 +0000 UTC m=+207.856048483" observedRunningTime="2025-11-24 16:55:27.817587811 +0000 UTC m=+209.064556479" 
watchObservedRunningTime="2025-11-24 16:55:27.825616026 +0000 UTC m=+209.072584694" Nov 24 16:55:28 crc kubenswrapper[4768]: I1124 16:55:28.783733 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgh5z" event={"ID":"b2069e94-bdf1-4d31-9294-e19c0393e478","Type":"ContainerStarted","Data":"c355af5a7b6b921395031bb32a4a69ad96403990c7d746334f845b68281ede38"} Nov 24 16:55:28 crc kubenswrapper[4768]: I1124 16:55:28.804576 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tgh5z" podStartSLOduration=2.733870167 podStartE2EDuration="57.804542442s" podCreationTimestamp="2025-11-24 16:54:31 +0000 UTC" firstStartedPulling="2025-11-24 16:54:33.306174143 +0000 UTC m=+154.553142801" lastFinishedPulling="2025-11-24 16:55:28.376846378 +0000 UTC m=+209.623815076" observedRunningTime="2025-11-24 16:55:28.801010709 +0000 UTC m=+210.047979367" watchObservedRunningTime="2025-11-24 16:55:28.804542442 +0000 UTC m=+210.051511100" Nov 24 16:55:30 crc kubenswrapper[4768]: I1124 16:55:30.136552 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-llvqz" Nov 24 16:55:30 crc kubenswrapper[4768]: I1124 16:55:30.136875 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-llvqz" Nov 24 16:55:30 crc kubenswrapper[4768]: I1124 16:55:30.318874 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-llvqz" Nov 24 16:55:30 crc kubenswrapper[4768]: I1124 16:55:30.319149 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-skz5c" Nov 24 16:55:30 crc kubenswrapper[4768]: I1124 16:55:30.319224 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-skz5c" Nov 24 16:55:30 crc kubenswrapper[4768]: I1124 16:55:30.388746 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-skz5c" Nov 24 16:55:31 crc kubenswrapper[4768]: I1124 16:55:31.695983 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tgh5z" Nov 24 16:55:31 crc kubenswrapper[4768]: I1124 16:55:31.696578 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tgh5z" Nov 24 16:55:31 crc kubenswrapper[4768]: I1124 16:55:31.765472 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tgh5z" Nov 24 16:55:34 crc kubenswrapper[4768]: I1124 16:55:34.893482 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 16:55:34 crc kubenswrapper[4768]: I1124 16:55:34.894176 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 16:55:34 crc kubenswrapper[4768]: I1124 16:55:34.894244 
4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf255" Nov 24 16:55:34 crc kubenswrapper[4768]: I1124 16:55:34.895019 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760"} pod="openshift-machine-config-operator/machine-config-daemon-jf255" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 16:55:34 crc kubenswrapper[4768]: I1124 16:55:34.895182 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" containerID="cri-o://d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760" gracePeriod=600 Nov 24 16:55:35 crc kubenswrapper[4768]: I1124 16:55:35.833624 4768 generic.go:334] "Generic (PLEG): container finished" podID="291b46fc-d3a5-457b-a85a-306f37d45ecc" containerID="0ad2ed6b73c5b8f533c169e9a27800bbdbae57fa9b79d1589407bf02b2f8d6c6" exitCode=0 Nov 24 16:55:35 crc kubenswrapper[4768]: I1124 16:55:35.833727 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cw5r9" event={"ID":"291b46fc-d3a5-457b-a85a-306f37d45ecc","Type":"ContainerDied","Data":"0ad2ed6b73c5b8f533c169e9a27800bbdbae57fa9b79d1589407bf02b2f8d6c6"} Nov 24 16:55:35 crc kubenswrapper[4768]: I1124 16:55:35.837225 4768 generic.go:334] "Generic (PLEG): container finished" podID="517d8128-bef5-40a3-a786-5010780c2a58" containerID="d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760" exitCode=0 Nov 24 16:55:35 crc kubenswrapper[4768]: I1124 16:55:35.837253 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" event={"ID":"517d8128-bef5-40a3-a786-5010780c2a58","Type":"ContainerDied","Data":"d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760"} Nov 24 16:55:35 crc kubenswrapper[4768]: I1124 16:55:35.837281 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" event={"ID":"517d8128-bef5-40a3-a786-5010780c2a58","Type":"ContainerStarted","Data":"7a329c2bcc79a9f3f10df267612fb0d9f6aef0e5add7ff881e55c584ace2a157"} Nov 24 16:55:36 crc kubenswrapper[4768]: I1124 16:55:36.844314 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cw5r9" event={"ID":"291b46fc-d3a5-457b-a85a-306f37d45ecc","Type":"ContainerStarted","Data":"42df1ea40f0e7d78dc0c228c106e219e7d162c5470ad277b8794de2ad45af8b2"} Nov 24 16:55:37 crc kubenswrapper[4768]: I1124 16:55:37.850259 4768 generic.go:334] "Generic (PLEG): container finished" podID="bd7fb843-d66e-46c2-9eed-e8525f79b7ed" containerID="3584c529c8fcc07e7b53defa56668f50759cc19b5a2135105369dc2a26e35f03" exitCode=0 Nov 24 16:55:37 crc kubenswrapper[4768]: I1124 16:55:37.850310 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9k7h" event={"ID":"bd7fb843-d66e-46c2-9eed-e8525f79b7ed","Type":"ContainerDied","Data":"3584c529c8fcc07e7b53defa56668f50759cc19b5a2135105369dc2a26e35f03"} Nov 24 16:55:37 crc kubenswrapper[4768]: I1124 16:55:37.876488 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-cw5r9" podStartSLOduration=3.794769462 podStartE2EDuration="1m8.876471129s" podCreationTimestamp="2025-11-24 16:54:29 +0000 UTC" firstStartedPulling="2025-11-24 16:54:31.200464698 +0000 UTC m=+152.447433346" lastFinishedPulling="2025-11-24 16:55:36.282166345 +0000 UTC m=+217.529135013" observedRunningTime="2025-11-24 16:55:36.871309106 +0000 UTC m=+218.118277764" watchObservedRunningTime="2025-11-24 16:55:37.876471129 +0000 UTC m=+219.123439787" Nov 24 16:55:38 crc kubenswrapper[4768]: I1124 16:55:38.857386 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9k7h" event={"ID":"bd7fb843-d66e-46c2-9eed-e8525f79b7ed","Type":"ContainerStarted","Data":"5a4e3d1dd8b3156c7d92f2251b7573d493033462d30b1f9c337722e720736dc2"} Nov 24 16:55:39 crc kubenswrapper[4768]: I1124 16:55:39.784181 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cw5r9" Nov 24 16:55:39 crc kubenswrapper[4768]: I1124 16:55:39.784588 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cw5r9" Nov 24 16:55:39 crc kubenswrapper[4768]: I1124 16:55:39.839792 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cw5r9" Nov 24 16:55:39 crc kubenswrapper[4768]: I1124 16:55:39.857241 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m9k7h" podStartSLOduration=3.788599865 podStartE2EDuration="1m10.857220781s" podCreationTimestamp="2025-11-24 16:54:29 +0000 UTC" firstStartedPulling="2025-11-24 16:54:31.209143055 +0000 UTC m=+152.456111713" lastFinishedPulling="2025-11-24 16:55:38.277763971 +0000 UTC m=+219.524732629" observedRunningTime="2025-11-24 16:55:38.879649448 +0000 UTC m=+220.126618106" watchObservedRunningTime="2025-11-24 16:55:39.857220781 +0000 UTC m=+221.104189439" Nov 24 16:55:39 crc kubenswrapper[4768]: I1124 16:55:39.863502 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vkl6" event={"ID":"ff262f7e-1ff3-47a7-8346-3e91a6d3583d","Type":"ContainerStarted","Data":"43567c6694b0b048e225d1cf90e59508912913f31cf1a07ce6894966414a1709"} Nov 24 16:55:40 crc kubenswrapper[4768]: I1124 16:55:40.173582 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m9k7h" Nov 24 16:55:40 crc kubenswrapper[4768]: I1124 16:55:40.174096 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m9k7h" Nov 24 16:55:40 crc kubenswrapper[4768]: I1124 16:55:40.199254 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-llvqz" Nov 24 16:55:40 crc kubenswrapper[4768]: I1124 16:55:40.218815 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m9k7h" Nov 24 16:55:40 crc kubenswrapper[4768]: I1124 16:55:40.378261 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-skz5c" Nov 24 16:55:40 crc kubenswrapper[4768]: I1124 16:55:40.873121 4768 generic.go:334] "Generic (PLEG): container finished" podID="ff262f7e-1ff3-47a7-8346-3e91a6d3583d" containerID="43567c6694b0b048e225d1cf90e59508912913f31cf1a07ce6894966414a1709" exitCode=0 
Nov 24 16:55:40 crc kubenswrapper[4768]: I1124 16:55:40.873779 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vkl6" event={"ID":"ff262f7e-1ff3-47a7-8346-3e91a6d3583d","Type":"ContainerDied","Data":"43567c6694b0b048e225d1cf90e59508912913f31cf1a07ce6894966414a1709"}
Nov 24 16:55:41 crc kubenswrapper[4768]: I1124 16:55:41.738203 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tgh5z"
Nov 24 16:55:41 crc kubenswrapper[4768]: I1124 16:55:41.880397 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vddcd" event={"ID":"b0d479d0-7e74-4a1a-8da3-7a42b959b0a5","Type":"ContainerStarted","Data":"df482cab2c9ad9956fb7c3d993ba1ee2b05acb9d5f24857c81a22b8351fe8170"}
Nov 24 16:55:41 crc kubenswrapper[4768]: I1124 16:55:41.882520 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vkl6" event={"ID":"ff262f7e-1ff3-47a7-8346-3e91a6d3583d","Type":"ContainerStarted","Data":"c5d875a2a103e4fbb759a158af7d474083706bc7d01034d074bf11919bdc9667"}
Nov 24 16:55:41 crc kubenswrapper[4768]: I1124 16:55:41.883979 4768 generic.go:334] "Generic (PLEG): container finished" podID="82e36f11-c00f-4548-b15d-a13a98dae032" containerID="8f5faa9be1d07fe57030fc7f98198f68ea89f7ad06fd914ce7dfd15b053f6b05" exitCode=0
Nov 24 16:55:41 crc kubenswrapper[4768]: I1124 16:55:41.884028 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzc9b" event={"ID":"82e36f11-c00f-4548-b15d-a13a98dae032","Type":"ContainerDied","Data":"8f5faa9be1d07fe57030fc7f98198f68ea89f7ad06fd914ce7dfd15b053f6b05"}
Nov 24 16:55:41 crc kubenswrapper[4768]: I1124 16:55:41.918578 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8vkl6" podStartSLOduration=2.944675771 podStartE2EDuration="1m9.918555913s" podCreationTimestamp="2025-11-24 16:54:32 +0000 UTC" firstStartedPulling="2025-11-24 16:54:34.328684644 +0000 UTC m=+155.575653302" lastFinishedPulling="2025-11-24 16:55:41.302564786 +0000 UTC m=+222.549533444" observedRunningTime="2025-11-24 16:55:41.918220112 +0000 UTC m=+223.165188770" watchObservedRunningTime="2025-11-24 16:55:41.918555913 +0000 UTC m=+223.165524571"
Nov 24 16:55:42 crc kubenswrapper[4768]: I1124 16:55:42.895982 4768 generic.go:334] "Generic (PLEG): container finished" podID="b0d479d0-7e74-4a1a-8da3-7a42b959b0a5" containerID="df482cab2c9ad9956fb7c3d993ba1ee2b05acb9d5f24857c81a22b8351fe8170" exitCode=0
Nov 24 16:55:42 crc kubenswrapper[4768]: I1124 16:55:42.896071 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vddcd" event={"ID":"b0d479d0-7e74-4a1a-8da3-7a42b959b0a5","Type":"ContainerDied","Data":"df482cab2c9ad9956fb7c3d993ba1ee2b05acb9d5f24857c81a22b8351fe8170"}
Nov 24 16:55:42 crc kubenswrapper[4768]: I1124 16:55:42.978413 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-skz5c"]
Nov 24 16:55:42 crc kubenswrapper[4768]: I1124 16:55:42.978659 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-skz5c" podUID="416b2c6c-bf32-4f82-98d6-75abc55f3118" containerName="registry-server" containerID="cri-o://4e6284fa861d425e17db7b6c7dd2ea1215800dca92816ef8f9e08bc1a4fa06dd" gracePeriod=2
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.160676 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8vkl6"
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.160968 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8vkl6"
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.394941 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-skz5c"
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.496452 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/416b2c6c-bf32-4f82-98d6-75abc55f3118-catalog-content\") pod \"416b2c6c-bf32-4f82-98d6-75abc55f3118\" (UID: \"416b2c6c-bf32-4f82-98d6-75abc55f3118\") "
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.496497 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/416b2c6c-bf32-4f82-98d6-75abc55f3118-utilities\") pod \"416b2c6c-bf32-4f82-98d6-75abc55f3118\" (UID: \"416b2c6c-bf32-4f82-98d6-75abc55f3118\") "
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.496584 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zq75\" (UniqueName: \"kubernetes.io/projected/416b2c6c-bf32-4f82-98d6-75abc55f3118-kube-api-access-9zq75\") pod \"416b2c6c-bf32-4f82-98d6-75abc55f3118\" (UID: \"416b2c6c-bf32-4f82-98d6-75abc55f3118\") "
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.498913 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/416b2c6c-bf32-4f82-98d6-75abc55f3118-utilities" (OuterVolumeSpecName: "utilities") pod "416b2c6c-bf32-4f82-98d6-75abc55f3118" (UID: "416b2c6c-bf32-4f82-98d6-75abc55f3118"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.502705 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/416b2c6c-bf32-4f82-98d6-75abc55f3118-kube-api-access-9zq75" (OuterVolumeSpecName: "kube-api-access-9zq75") pod "416b2c6c-bf32-4f82-98d6-75abc55f3118" (UID: "416b2c6c-bf32-4f82-98d6-75abc55f3118"). InnerVolumeSpecName "kube-api-access-9zq75". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.549281 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/416b2c6c-bf32-4f82-98d6-75abc55f3118-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "416b2c6c-bf32-4f82-98d6-75abc55f3118" (UID: "416b2c6c-bf32-4f82-98d6-75abc55f3118"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.597975 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/416b2c6c-bf32-4f82-98d6-75abc55f3118-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.598006 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/416b2c6c-bf32-4f82-98d6-75abc55f3118-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.598017 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zq75\" (UniqueName: \"kubernetes.io/projected/416b2c6c-bf32-4f82-98d6-75abc55f3118-kube-api-access-9zq75\") on node \"crc\" DevicePath \"\""
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.903402 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzc9b" event={"ID":"82e36f11-c00f-4548-b15d-a13a98dae032","Type":"ContainerStarted","Data":"73b3a7fe45110995037e59bfee6428d9925692178b9135388796a749ee7f8ddc"}
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.905887 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vddcd" event={"ID":"b0d479d0-7e74-4a1a-8da3-7a42b959b0a5","Type":"ContainerStarted","Data":"a47ac516685b11c6bff00d3f6ef28f46955b54342607ff2bc340f71c0cb85226"}
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.908833 4768 generic.go:334] "Generic (PLEG): container finished" podID="416b2c6c-bf32-4f82-98d6-75abc55f3118" containerID="4e6284fa861d425e17db7b6c7dd2ea1215800dca92816ef8f9e08bc1a4fa06dd" exitCode=0
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.908873 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-skz5c"
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.908884 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skz5c" event={"ID":"416b2c6c-bf32-4f82-98d6-75abc55f3118","Type":"ContainerDied","Data":"4e6284fa861d425e17db7b6c7dd2ea1215800dca92816ef8f9e08bc1a4fa06dd"}
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.909231 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-skz5c" event={"ID":"416b2c6c-bf32-4f82-98d6-75abc55f3118","Type":"ContainerDied","Data":"0e37e03592c0979ea92d5428251fda9a170eb3697b9c9c5667433aebbbcecc84"}
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.909280 4768 scope.go:117] "RemoveContainer" containerID="4e6284fa861d425e17db7b6c7dd2ea1215800dca92816ef8f9e08bc1a4fa06dd"
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.920636 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kzc9b" podStartSLOduration=3.41022141 podStartE2EDuration="1m12.920619644s" podCreationTimestamp="2025-11-24 16:54:31 +0000 UTC" firstStartedPulling="2025-11-24 16:54:33.313652163 +0000 UTC m=+154.560620821" lastFinishedPulling="2025-11-24 16:55:42.824050397 +0000 UTC m=+224.071019055" observedRunningTime="2025-11-24 16:55:43.919149367 +0000 UTC m=+225.166118025" watchObservedRunningTime="2025-11-24 16:55:43.920619644 +0000 UTC m=+225.167588302"
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.925672 4768 scope.go:117] "RemoveContainer" containerID="982e3ea5733c5a5183f340023b637e66a56f23ac602d8087d4e681b46be7e3a7"
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.931301 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-skz5c"]
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.938965 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-skz5c"]
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.950221 4768 scope.go:117] "RemoveContainer" containerID="8a244221b0a953abfbb37c515e34627c70bc7871754331b0f0fe403c214c1050"
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.952696 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vddcd" podStartSLOduration=3.031078422 podStartE2EDuration="1m10.952683246s" podCreationTimestamp="2025-11-24 16:54:33 +0000 UTC" firstStartedPulling="2025-11-24 16:54:35.354803104 +0000 UTC m=+156.601771762" lastFinishedPulling="2025-11-24 16:55:43.276407938 +0000 UTC m=+224.523376586" observedRunningTime="2025-11-24 16:55:43.952549642 +0000 UTC m=+225.199518300" watchObservedRunningTime="2025-11-24 16:55:43.952683246 +0000 UTC m=+225.199651904"
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.967483 4768 scope.go:117] "RemoveContainer" containerID="4e6284fa861d425e17db7b6c7dd2ea1215800dca92816ef8f9e08bc1a4fa06dd"
Nov 24 16:55:43 crc kubenswrapper[4768]: E1124 16:55:43.968227 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e6284fa861d425e17db7b6c7dd2ea1215800dca92816ef8f9e08bc1a4fa06dd\": container with ID starting with 4e6284fa861d425e17db7b6c7dd2ea1215800dca92816ef8f9e08bc1a4fa06dd not found: ID does not exist" containerID="4e6284fa861d425e17db7b6c7dd2ea1215800dca92816ef8f9e08bc1a4fa06dd"
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.968274 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e6284fa861d425e17db7b6c7dd2ea1215800dca92816ef8f9e08bc1a4fa06dd"} err="failed to get container status \"4e6284fa861d425e17db7b6c7dd2ea1215800dca92816ef8f9e08bc1a4fa06dd\": rpc error: code = NotFound desc = could not find container \"4e6284fa861d425e17db7b6c7dd2ea1215800dca92816ef8f9e08bc1a4fa06dd\": container with ID starting with 4e6284fa861d425e17db7b6c7dd2ea1215800dca92816ef8f9e08bc1a4fa06dd not found: ID does not exist"
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.968311 4768 scope.go:117] "RemoveContainer" containerID="982e3ea5733c5a5183f340023b637e66a56f23ac602d8087d4e681b46be7e3a7"
Nov 24 16:55:43 crc kubenswrapper[4768]: E1124 16:55:43.968867 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"982e3ea5733c5a5183f340023b637e66a56f23ac602d8087d4e681b46be7e3a7\": container with ID starting with 982e3ea5733c5a5183f340023b637e66a56f23ac602d8087d4e681b46be7e3a7 not found: ID does not exist" containerID="982e3ea5733c5a5183f340023b637e66a56f23ac602d8087d4e681b46be7e3a7"
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.968897 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"982e3ea5733c5a5183f340023b637e66a56f23ac602d8087d4e681b46be7e3a7"} err="failed to get container status \"982e3ea5733c5a5183f340023b637e66a56f23ac602d8087d4e681b46be7e3a7\": rpc error: code = NotFound desc = could not find container \"982e3ea5733c5a5183f340023b637e66a56f23ac602d8087d4e681b46be7e3a7\": container with ID starting with 982e3ea5733c5a5183f340023b637e66a56f23ac602d8087d4e681b46be7e3a7 not found: ID does not exist"
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.968924 4768 scope.go:117] "RemoveContainer" containerID="8a244221b0a953abfbb37c515e34627c70bc7871754331b0f0fe403c214c1050"
Nov 24 16:55:43 crc kubenswrapper[4768]: E1124 16:55:43.969285 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a244221b0a953abfbb37c515e34627c70bc7871754331b0f0fe403c214c1050\": container with ID starting with 8a244221b0a953abfbb37c515e34627c70bc7871754331b0f0fe403c214c1050 not found: ID does not exist" containerID="8a244221b0a953abfbb37c515e34627c70bc7871754331b0f0fe403c214c1050"
Nov 24 16:55:43 crc kubenswrapper[4768]: I1124 16:55:43.969306 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a244221b0a953abfbb37c515e34627c70bc7871754331b0f0fe403c214c1050"} err="failed to get container status \"8a244221b0a953abfbb37c515e34627c70bc7871754331b0f0fe403c214c1050\": rpc error: code = NotFound desc = could not find container \"8a244221b0a953abfbb37c515e34627c70bc7871754331b0f0fe403c214c1050\": container with ID starting with 8a244221b0a953abfbb37c515e34627c70bc7871754331b0f0fe403c214c1050 not found: ID does not exist"
Nov 24 16:55:44 crc kubenswrapper[4768]: I1124 16:55:44.227086 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8vkl6" podUID="ff262f7e-1ff3-47a7-8346-3e91a6d3583d" containerName="registry-server" probeResult="failure" output=<
Nov 24 16:55:44 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s
Nov 24 16:55:44 crc kubenswrapper[4768]: >
Nov 24 16:55:45 crc kubenswrapper[4768]: I1124 16:55:45.587331 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="416b2c6c-bf32-4f82-98d6-75abc55f3118" path="/var/lib/kubelet/pods/416b2c6c-bf32-4f82-98d6-75abc55f3118/volumes"
Nov 24 16:55:49 crc kubenswrapper[4768]: I1124 16:55:49.830967 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cw5r9"
Nov 24 16:55:50 crc kubenswrapper[4768]: I1124 16:55:50.301572 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m9k7h"
Nov 24 16:55:52 crc kubenswrapper[4768]: I1124 16:55:52.118530 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kzc9b"
Nov 24 16:55:52 crc kubenswrapper[4768]: I1124 16:55:52.118640 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kzc9b"
Nov 24 16:55:52 crc kubenswrapper[4768]: I1124 16:55:52.166909 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kzc9b"
Nov 24 16:55:52 crc kubenswrapper[4768]: I1124 16:55:52.997602 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kzc9b"
Nov 24 16:55:53 crc kubenswrapper[4768]: I1124 16:55:53.206091 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8vkl6"
Nov 24 16:55:53 crc kubenswrapper[4768]: I1124 16:55:53.264515 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8vkl6"
Nov 24 16:55:53 crc kubenswrapper[4768]: I1124 16:55:53.375629 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m9k7h"]
Nov 24 16:55:53 crc kubenswrapper[4768]: I1124 16:55:53.375991 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m9k7h" podUID="bd7fb843-d66e-46c2-9eed-e8525f79b7ed" containerName="registry-server" containerID="cri-o://5a4e3d1dd8b3156c7d92f2251b7573d493033462d30b1f9c337722e720736dc2" gracePeriod=2
Nov 24 16:55:53 crc kubenswrapper[4768]: I1124 16:55:53.569064 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vddcd"
Nov 24 16:55:53 crc kubenswrapper[4768]: I1124 16:55:53.569364 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vddcd"
Nov 24 16:55:53 crc kubenswrapper[4768]: I1124 16:55:53.615871 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vddcd"
Nov 24 16:55:54 crc kubenswrapper[4768]: I1124 16:55:54.005201 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vddcd"
Nov 24 16:55:54 crc kubenswrapper[4768]: I1124 16:55:54.974664 4768 generic.go:334] "Generic (PLEG): container finished" podID="bd7fb843-d66e-46c2-9eed-e8525f79b7ed" containerID="5a4e3d1dd8b3156c7d92f2251b7573d493033462d30b1f9c337722e720736dc2" exitCode=0
Nov 24 16:55:54 crc kubenswrapper[4768]: I1124 16:55:54.974836 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9k7h" event={"ID":"bd7fb843-d66e-46c2-9eed-e8525f79b7ed","Type":"ContainerDied","Data":"5a4e3d1dd8b3156c7d92f2251b7573d493033462d30b1f9c337722e720736dc2"}
Nov 24 16:55:55 crc kubenswrapper[4768]: I1124 16:55:55.093455 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m9k7h"
Nov 24 16:55:55 crc kubenswrapper[4768]: I1124 16:55:55.176499 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vddcd"]
Nov 24 16:55:55 crc kubenswrapper[4768]: I1124 16:55:55.204059 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7fb843-d66e-46c2-9eed-e8525f79b7ed-utilities\") pod \"bd7fb843-d66e-46c2-9eed-e8525f79b7ed\" (UID: \"bd7fb843-d66e-46c2-9eed-e8525f79b7ed\") "
Nov 24 16:55:55 crc kubenswrapper[4768]: I1124 16:55:55.204313 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt7j5\" (UniqueName: \"kubernetes.io/projected/bd7fb843-d66e-46c2-9eed-e8525f79b7ed-kube-api-access-nt7j5\") pod \"bd7fb843-d66e-46c2-9eed-e8525f79b7ed\" (UID: \"bd7fb843-d66e-46c2-9eed-e8525f79b7ed\") "
Nov 24 16:55:55 crc kubenswrapper[4768]: I1124 16:55:55.204485 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7fb843-d66e-46c2-9eed-e8525f79b7ed-catalog-content\") pod \"bd7fb843-d66e-46c2-9eed-e8525f79b7ed\" (UID: \"bd7fb843-d66e-46c2-9eed-e8525f79b7ed\") "
Nov 24 16:55:55 crc kubenswrapper[4768]: I1124 16:55:55.204950 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd7fb843-d66e-46c2-9eed-e8525f79b7ed-utilities" (OuterVolumeSpecName: "utilities") pod "bd7fb843-d66e-46c2-9eed-e8525f79b7ed" (UID: "bd7fb843-d66e-46c2-9eed-e8525f79b7ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 16:55:55 crc kubenswrapper[4768]: I1124 16:55:55.221563 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7fb843-d66e-46c2-9eed-e8525f79b7ed-kube-api-access-nt7j5" (OuterVolumeSpecName: "kube-api-access-nt7j5") pod "bd7fb843-d66e-46c2-9eed-e8525f79b7ed" (UID: "bd7fb843-d66e-46c2-9eed-e8525f79b7ed"). InnerVolumeSpecName "kube-api-access-nt7j5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 16:55:55 crc kubenswrapper[4768]: I1124 16:55:55.254947 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd7fb843-d66e-46c2-9eed-e8525f79b7ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd7fb843-d66e-46c2-9eed-e8525f79b7ed" (UID: "bd7fb843-d66e-46c2-9eed-e8525f79b7ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 16:55:55 crc kubenswrapper[4768]: I1124 16:55:55.306385 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7fb843-d66e-46c2-9eed-e8525f79b7ed-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 16:55:55 crc kubenswrapper[4768]: I1124 16:55:55.306552 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt7j5\" (UniqueName: \"kubernetes.io/projected/bd7fb843-d66e-46c2-9eed-e8525f79b7ed-kube-api-access-nt7j5\") on node \"crc\" DevicePath \"\""
Nov 24 16:55:55 crc kubenswrapper[4768]: I1124 16:55:55.306617 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7fb843-d66e-46c2-9eed-e8525f79b7ed-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 16:55:55 crc kubenswrapper[4768]: I1124 16:55:55.981916 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9k7h" event={"ID":"bd7fb843-d66e-46c2-9eed-e8525f79b7ed","Type":"ContainerDied","Data":"50d76ad47bcdeb2a641200aa7ce91bfc20671b7297b767a1bc9813fc838affcf"}
Nov 24 16:55:55 crc kubenswrapper[4768]: I1124 16:55:55.982295 4768 scope.go:117] "RemoveContainer" containerID="5a4e3d1dd8b3156c7d92f2251b7573d493033462d30b1f9c337722e720736dc2"
Nov 24 16:55:55 crc kubenswrapper[4768]: I1124 16:55:55.981999 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m9k7h"
Nov 24 16:55:56 crc kubenswrapper[4768]: I1124 16:55:56.000882 4768 scope.go:117] "RemoveContainer" containerID="3584c529c8fcc07e7b53defa56668f50759cc19b5a2135105369dc2a26e35f03"
Nov 24 16:55:56 crc kubenswrapper[4768]: I1124 16:55:56.001931 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m9k7h"]
Nov 24 16:55:56 crc kubenswrapper[4768]: I1124 16:55:56.005240 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m9k7h"]
Nov 24 16:55:56 crc kubenswrapper[4768]: I1124 16:55:56.017298 4768 scope.go:117] "RemoveContainer" containerID="980bbb656bc36ce2edb99c00240895909441209bafdafce9245e39fe449ef1a0"
Nov 24 16:55:56 crc kubenswrapper[4768]: I1124 16:55:56.988291 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vddcd" podUID="b0d479d0-7e74-4a1a-8da3-7a42b959b0a5" containerName="registry-server" containerID="cri-o://a47ac516685b11c6bff00d3f6ef28f46955b54342607ff2bc340f71c0cb85226" gracePeriod=2
Nov 24 16:55:57 crc kubenswrapper[4768]: I1124 16:55:57.573517 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzc9b"]
Nov 24 16:55:57 crc kubenswrapper[4768]: I1124 16:55:57.574059 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kzc9b" podUID="82e36f11-c00f-4548-b15d-a13a98dae032" containerName="registry-server" containerID="cri-o://73b3a7fe45110995037e59bfee6428d9925692178b9135388796a749ee7f8ddc" gracePeriod=2
Nov 24 16:55:57 crc kubenswrapper[4768]: I1124 16:55:57.587072 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd7fb843-d66e-46c2-9eed-e8525f79b7ed" path="/var/lib/kubelet/pods/bd7fb843-d66e-46c2-9eed-e8525f79b7ed/volumes"
Nov 24 16:55:57 crc kubenswrapper[4768]: I1124 16:55:57.896669 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vddcd"
Nov 24 16:55:57 crc kubenswrapper[4768]: I1124 16:55:57.940151 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d479d0-7e74-4a1a-8da3-7a42b959b0a5-utilities\") pod \"b0d479d0-7e74-4a1a-8da3-7a42b959b0a5\" (UID: \"b0d479d0-7e74-4a1a-8da3-7a42b959b0a5\") "
Nov 24 16:55:57 crc kubenswrapper[4768]: I1124 16:55:57.941178 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0d479d0-7e74-4a1a-8da3-7a42b959b0a5-utilities" (OuterVolumeSpecName: "utilities") pod "b0d479d0-7e74-4a1a-8da3-7a42b959b0a5" (UID: "b0d479d0-7e74-4a1a-8da3-7a42b959b0a5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 16:55:57 crc kubenswrapper[4768]: I1124 16:55:57.941373 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp5vg\" (UniqueName: \"kubernetes.io/projected/b0d479d0-7e74-4a1a-8da3-7a42b959b0a5-kube-api-access-bp5vg\") pod \"b0d479d0-7e74-4a1a-8da3-7a42b959b0a5\" (UID: \"b0d479d0-7e74-4a1a-8da3-7a42b959b0a5\") "
Nov 24 16:55:57 crc kubenswrapper[4768]: I1124 16:55:57.942090 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d479d0-7e74-4a1a-8da3-7a42b959b0a5-catalog-content\") pod \"b0d479d0-7e74-4a1a-8da3-7a42b959b0a5\" (UID: \"b0d479d0-7e74-4a1a-8da3-7a42b959b0a5\") "
Nov 24 16:55:57 crc kubenswrapper[4768]: I1124 16:55:57.942610 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d479d0-7e74-4a1a-8da3-7a42b959b0a5-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 16:55:57 crc kubenswrapper[4768]: I1124 16:55:57.950870 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d479d0-7e74-4a1a-8da3-7a42b959b0a5-kube-api-access-bp5vg" (OuterVolumeSpecName: "kube-api-access-bp5vg") pod "b0d479d0-7e74-4a1a-8da3-7a42b959b0a5" (UID: "b0d479d0-7e74-4a1a-8da3-7a42b959b0a5"). InnerVolumeSpecName "kube-api-access-bp5vg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 16:55:57 crc kubenswrapper[4768]: I1124 16:55:57.995462 4768 generic.go:334] "Generic (PLEG): container finished" podID="b0d479d0-7e74-4a1a-8da3-7a42b959b0a5" containerID="a47ac516685b11c6bff00d3f6ef28f46955b54342607ff2bc340f71c0cb85226" exitCode=0
Nov 24 16:55:57 crc kubenswrapper[4768]: I1124 16:55:57.995512 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vddcd"
Nov 24 16:55:57 crc kubenswrapper[4768]: I1124 16:55:57.995536 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vddcd" event={"ID":"b0d479d0-7e74-4a1a-8da3-7a42b959b0a5","Type":"ContainerDied","Data":"a47ac516685b11c6bff00d3f6ef28f46955b54342607ff2bc340f71c0cb85226"}
Nov 24 16:55:57 crc kubenswrapper[4768]: I1124 16:55:57.995575 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vddcd" event={"ID":"b0d479d0-7e74-4a1a-8da3-7a42b959b0a5","Type":"ContainerDied","Data":"3c641f0029f0e3115c1cc7fea91afc8efec966442ccc1d01ed1a1216c10969d3"}
Nov 24 16:55:57 crc kubenswrapper[4768]: I1124 16:55:57.995609 4768 scope.go:117] "RemoveContainer" containerID="a47ac516685b11c6bff00d3f6ef28f46955b54342607ff2bc340f71c0cb85226"
Nov 24 16:55:57 crc kubenswrapper[4768]: I1124 16:55:57.998230 4768 generic.go:334] "Generic (PLEG): container finished" podID="82e36f11-c00f-4548-b15d-a13a98dae032" containerID="73b3a7fe45110995037e59bfee6428d9925692178b9135388796a749ee7f8ddc" exitCode=0
Nov 24 16:55:57 crc kubenswrapper[4768]: I1124 16:55:57.998258 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzc9b" event={"ID":"82e36f11-c00f-4548-b15d-a13a98dae032","Type":"ContainerDied","Data":"73b3a7fe45110995037e59bfee6428d9925692178b9135388796a749ee7f8ddc"}
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.018298 4768 scope.go:117] "RemoveContainer" containerID="df482cab2c9ad9956fb7c3d993ba1ee2b05acb9d5f24857c81a22b8351fe8170"
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.018472 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzc9b"
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.031689 4768 scope.go:117] "RemoveContainer" containerID="9e8b98b6fadd34a9feda22161cef12291bd6c5552a0bbd1315c0719524e9fbbc"
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.044255 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp5vg\" (UniqueName: \"kubernetes.io/projected/b0d479d0-7e74-4a1a-8da3-7a42b959b0a5-kube-api-access-bp5vg\") on node \"crc\" DevicePath \"\""
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.048126 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0d479d0-7e74-4a1a-8da3-7a42b959b0a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0d479d0-7e74-4a1a-8da3-7a42b959b0a5" (UID: "b0d479d0-7e74-4a1a-8da3-7a42b959b0a5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.053537 4768 scope.go:117] "RemoveContainer" containerID="a47ac516685b11c6bff00d3f6ef28f46955b54342607ff2bc340f71c0cb85226"
Nov 24 16:55:58 crc kubenswrapper[4768]: E1124 16:55:58.053959 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a47ac516685b11c6bff00d3f6ef28f46955b54342607ff2bc340f71c0cb85226\": container with ID starting with a47ac516685b11c6bff00d3f6ef28f46955b54342607ff2bc340f71c0cb85226 not found: ID does not exist" containerID="a47ac516685b11c6bff00d3f6ef28f46955b54342607ff2bc340f71c0cb85226"
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.053999 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a47ac516685b11c6bff00d3f6ef28f46955b54342607ff2bc340f71c0cb85226"} err="failed to get container status \"a47ac516685b11c6bff00d3f6ef28f46955b54342607ff2bc340f71c0cb85226\": rpc error: code = NotFound desc = could not find container \"a47ac516685b11c6bff00d3f6ef28f46955b54342607ff2bc340f71c0cb85226\": container with ID starting with a47ac516685b11c6bff00d3f6ef28f46955b54342607ff2bc340f71c0cb85226 not found: ID does not exist"
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.054021 4768 scope.go:117] "RemoveContainer" containerID="df482cab2c9ad9956fb7c3d993ba1ee2b05acb9d5f24857c81a22b8351fe8170"
Nov 24 16:55:58 crc kubenswrapper[4768]: E1124 16:55:58.054287 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df482cab2c9ad9956fb7c3d993ba1ee2b05acb9d5f24857c81a22b8351fe8170\": container with ID starting with df482cab2c9ad9956fb7c3d993ba1ee2b05acb9d5f24857c81a22b8351fe8170 not found: ID does not exist" containerID="df482cab2c9ad9956fb7c3d993ba1ee2b05acb9d5f24857c81a22b8351fe8170"
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.054315 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df482cab2c9ad9956fb7c3d993ba1ee2b05acb9d5f24857c81a22b8351fe8170"} err="failed to get container status \"df482cab2c9ad9956fb7c3d993ba1ee2b05acb9d5f24857c81a22b8351fe8170\": rpc error: code = NotFound desc = could not find container \"df482cab2c9ad9956fb7c3d993ba1ee2b05acb9d5f24857c81a22b8351fe8170\": container with ID starting with df482cab2c9ad9956fb7c3d993ba1ee2b05acb9d5f24857c81a22b8351fe8170 not found: ID does not exist"
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.054333 4768 scope.go:117] "RemoveContainer" containerID="9e8b98b6fadd34a9feda22161cef12291bd6c5552a0bbd1315c0719524e9fbbc"
Nov 24 16:55:58 crc kubenswrapper[4768]: E1124 16:55:58.055140 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e8b98b6fadd34a9feda22161cef12291bd6c5552a0bbd1315c0719524e9fbbc\": container with ID starting with 9e8b98b6fadd34a9feda22161cef12291bd6c5552a0bbd1315c0719524e9fbbc not found: ID does not exist" containerID="9e8b98b6fadd34a9feda22161cef12291bd6c5552a0bbd1315c0719524e9fbbc"
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.055205 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e8b98b6fadd34a9feda22161cef12291bd6c5552a0bbd1315c0719524e9fbbc"} err="failed to get container status \"9e8b98b6fadd34a9feda22161cef12291bd6c5552a0bbd1315c0719524e9fbbc\": rpc error: code = NotFound desc = could not find container \"9e8b98b6fadd34a9feda22161cef12291bd6c5552a0bbd1315c0719524e9fbbc\": container with ID starting with 9e8b98b6fadd34a9feda22161cef12291bd6c5552a0bbd1315c0719524e9fbbc not found: ID does not exist"
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.145254 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82e36f11-c00f-4548-b15d-a13a98dae032-catalog-content\") pod \"82e36f11-c00f-4548-b15d-a13a98dae032\" (UID: \"82e36f11-c00f-4548-b15d-a13a98dae032\") "
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.145308 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5s5m\" (UniqueName: \"kubernetes.io/projected/82e36f11-c00f-4548-b15d-a13a98dae032-kube-api-access-z5s5m\") pod \"82e36f11-c00f-4548-b15d-a13a98dae032\" (UID: \"82e36f11-c00f-4548-b15d-a13a98dae032\") "
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.145338 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82e36f11-c00f-4548-b15d-a13a98dae032-utilities\") pod \"82e36f11-c00f-4548-b15d-a13a98dae032\" (UID: \"82e36f11-c00f-4548-b15d-a13a98dae032\") "
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.145491 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d479d0-7e74-4a1a-8da3-7a42b959b0a5-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.146288 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82e36f11-c00f-4548-b15d-a13a98dae032-utilities" (OuterVolumeSpecName: "utilities") pod "82e36f11-c00f-4548-b15d-a13a98dae032" (UID: "82e36f11-c00f-4548-b15d-a13a98dae032"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.151296 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82e36f11-c00f-4548-b15d-a13a98dae032-kube-api-access-z5s5m" (OuterVolumeSpecName: "kube-api-access-z5s5m") pod "82e36f11-c00f-4548-b15d-a13a98dae032" (UID: "82e36f11-c00f-4548-b15d-a13a98dae032"). InnerVolumeSpecName "kube-api-access-z5s5m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.159967 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82e36f11-c00f-4548-b15d-a13a98dae032-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82e36f11-c00f-4548-b15d-a13a98dae032" (UID: "82e36f11-c00f-4548-b15d-a13a98dae032"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.247100 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82e36f11-c00f-4548-b15d-a13a98dae032-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.247127 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5s5m\" (UniqueName: \"kubernetes.io/projected/82e36f11-c00f-4548-b15d-a13a98dae032-kube-api-access-z5s5m\") on node \"crc\" DevicePath \"\""
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.247139 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82e36f11-c00f-4548-b15d-a13a98dae032-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.326000 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vddcd"]
Nov 24 16:55:58 crc kubenswrapper[4768]: I1124 16:55:58.334949 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vddcd"]
Nov 24 16:55:59 crc kubenswrapper[4768]: I1124 16:55:59.006978 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzc9b" event={"ID":"82e36f11-c00f-4548-b15d-a13a98dae032","Type":"ContainerDied","Data":"93ac5974c9951680955616d6e6c4a0ef4e54624e9fad920d00a458c18d60dbbf"}
Nov 24 16:55:59 crc kubenswrapper[4768]: I1124 16:55:59.007037 4768 scope.go:117] "RemoveContainer" containerID="73b3a7fe45110995037e59bfee6428d9925692178b9135388796a749ee7f8ddc"
Nov 24 16:55:59 crc kubenswrapper[4768]: I1124 16:55:59.007161 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzc9b"
Nov 24 16:55:59 crc kubenswrapper[4768]: I1124 16:55:59.023287 4768 scope.go:117] "RemoveContainer" containerID="8f5faa9be1d07fe57030fc7f98198f68ea89f7ad06fd914ce7dfd15b053f6b05"
Nov 24 16:55:59 crc kubenswrapper[4768]: I1124 16:55:59.041979 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzc9b"]
Nov 24 16:55:59 crc kubenswrapper[4768]: I1124 16:55:59.044605 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzc9b"]
Nov 24 16:55:59 crc kubenswrapper[4768]: I1124 16:55:59.047907 4768 scope.go:117] "RemoveContainer" containerID="8ad761d9123b09bd6acfc8585b89eb0ca20143a7b4705f993093074b87212d0d"
Nov 24 16:55:59 crc kubenswrapper[4768]: I1124 16:55:59.589826 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82e36f11-c00f-4548-b15d-a13a98dae032" path="/var/lib/kubelet/pods/82e36f11-c00f-4548-b15d-a13a98dae032/volumes"
Nov 24 16:55:59 crc kubenswrapper[4768]: I1124 16:55:59.590560 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0d479d0-7e74-4a1a-8da3-7a42b959b0a5" path="/var/lib/kubelet/pods/b0d479d0-7e74-4a1a-8da3-7a42b959b0a5/volumes"
Nov 24 16:56:01 crc kubenswrapper[4768]: I1124 16:56:01.339672 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4dgcz"]
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.519866 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-llvqz"]
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.520866 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-llvqz" podUID="069473c1-4cad-470f-a20e-2352a5bd6ff4" containerName="registry-server" containerID="cri-o://e80e4c33eb34f53e342addc8e366c1d8e8f605c6a81bd4f657ddb7ad4d31b94f" gracePeriod=30
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.534877 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cw5r9"]
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.535212 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cw5r9" podUID="291b46fc-d3a5-457b-a85a-306f37d45ecc" containerName="registry-server" containerID="cri-o://42df1ea40f0e7d78dc0c228c106e219e7d162c5470ad277b8794de2ad45af8b2" gracePeriod=30
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.537756 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rhk4d"]
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.537977 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" podUID="6393ad56-dadc-453f-b4f6-b7a6b52304e1" containerName="marketplace-operator" containerID="cri-o://b842ad2a0550c3e6ff4623d31bdb892981a9ed84a024a19e256b0570542f11f7" gracePeriod=30
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.548007 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgh5z"]
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.548327 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tgh5z" podUID="b2069e94-bdf1-4d31-9294-e19c0393e478" containerName="registry-server" containerID="cri-o://c355af5a7b6b921395031bb32a4a69ad96403990c7d746334f845b68281ede38" gracePeriod=30
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.558040 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8vkl6"]
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.558332 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8vkl6" podUID="ff262f7e-1ff3-47a7-8346-3e91a6d3583d" containerName="registry-server" containerID="cri-o://c5d875a2a103e4fbb759a158af7d474083706bc7d01034d074bf11919bdc9667" gracePeriod=30
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.566146 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5zvk7"]
Nov 24 16:56:02 crc kubenswrapper[4768]: E1124 16:56:02.566490 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d479d0-7e74-4a1a-8da3-7a42b959b0a5" containerName="extract-utilities"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.566511 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d479d0-7e74-4a1a-8da3-7a42b959b0a5" containerName="extract-utilities"
Nov 24 16:56:02 crc kubenswrapper[4768]: E1124 16:56:02.566526 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416b2c6c-bf32-4f82-98d6-75abc55f3118" containerName="extract-content"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.566536 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="416b2c6c-bf32-4f82-98d6-75abc55f3118" containerName="extract-content"
Nov 24 16:56:02 crc kubenswrapper[4768]: E1124 16:56:02.566545 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7fb843-d66e-46c2-9eed-e8525f79b7ed" containerName="extract-content"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.566553 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7fb843-d66e-46c2-9eed-e8525f79b7ed" containerName="extract-content"
Nov 24 16:56:02 crc kubenswrapper[4768]: E1124 16:56:02.566567 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e36f11-c00f-4548-b15d-a13a98dae032" containerName="registry-server"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.566574 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e36f11-c00f-4548-b15d-a13a98dae032" containerName="registry-server"
Nov 24 16:56:02 crc kubenswrapper[4768]: E1124 16:56:02.566588 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416b2c6c-bf32-4f82-98d6-75abc55f3118" containerName="extract-utilities"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.566595 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="416b2c6c-bf32-4f82-98d6-75abc55f3118" containerName="extract-utilities"
Nov 24 16:56:02 crc kubenswrapper[4768]: E1124 16:56:02.566607 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7fb843-d66e-46c2-9eed-e8525f79b7ed" containerName="extract-utilities"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.566614 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7fb843-d66e-46c2-9eed-e8525f79b7ed" containerName="extract-utilities"
Nov 24 16:56:02 crc kubenswrapper[4768]: E1124 16:56:02.566628 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7fb6ca0-4b81-490e-9463-87a297babdda" containerName="pruner"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.566637 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7fb6ca0-4b81-490e-9463-87a297babdda" containerName="pruner"
Nov 24 16:56:02 crc kubenswrapper[4768]: E1124 16:56:02.566648 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d479d0-7e74-4a1a-8da3-7a42b959b0a5" containerName="registry-server"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.566657 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d479d0-7e74-4a1a-8da3-7a42b959b0a5" containerName="registry-server"
Nov 24 16:56:02 crc kubenswrapper[4768]: E1124 16:56:02.566667 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7fb843-d66e-46c2-9eed-e8525f79b7ed" containerName="registry-server"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.566675 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7fb843-d66e-46c2-9eed-e8525f79b7ed" containerName="registry-server"
Nov 24 16:56:02 crc kubenswrapper[4768]: E1124 16:56:02.566686 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d479d0-7e74-4a1a-8da3-7a42b959b0a5" containerName="extract-content"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.566694 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d479d0-7e74-4a1a-8da3-7a42b959b0a5" containerName="extract-content"
Nov 24 16:56:02 crc kubenswrapper[4768]: E1124 16:56:02.566707 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395b04f1-7a44-4f3a-bcde-42bfc7a50e43" containerName="pruner"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.566716 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="395b04f1-7a44-4f3a-bcde-42bfc7a50e43" containerName="pruner"
Nov 24 16:56:02 crc kubenswrapper[4768]: E1124 16:56:02.566731 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416b2c6c-bf32-4f82-98d6-75abc55f3118" containerName="registry-server"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.566738 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="416b2c6c-bf32-4f82-98d6-75abc55f3118" containerName="registry-server"
Nov 24 16:56:02 crc kubenswrapper[4768]: E1124 16:56:02.566752 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e36f11-c00f-4548-b15d-a13a98dae032" containerName="extract-utilities"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.566759 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e36f11-c00f-4548-b15d-a13a98dae032" containerName="extract-utilities"
Nov 24 16:56:02 crc kubenswrapper[4768]: E1124 16:56:02.566768 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e36f11-c00f-4548-b15d-a13a98dae032" containerName="extract-content"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.566776 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e36f11-c00f-4548-b15d-a13a98dae032" containerName="extract-content"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.566890 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="395b04f1-7a44-4f3a-bcde-42bfc7a50e43" containerName="pruner"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.566900 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e36f11-c00f-4548-b15d-a13a98dae032" containerName="registry-server"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.566913 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7fb6ca0-4b81-490e-9463-87a297babdda" containerName="pruner"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.566924 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d479d0-7e74-4a1a-8da3-7a42b959b0a5" containerName="registry-server"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.566939 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="416b2c6c-bf32-4f82-98d6-75abc55f3118" containerName="registry-server"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.566949 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7fb843-d66e-46c2-9eed-e8525f79b7ed" containerName="registry-server"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.567505 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5zvk7"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.568768 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5zvk7"]
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.622658 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/453d22cb-b151-4afd-8116-28d85514ca2c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5zvk7\" (UID: \"453d22cb-b151-4afd-8116-28d85514ca2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-5zvk7"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.622829 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lrrn\" (UniqueName: \"kubernetes.io/projected/453d22cb-b151-4afd-8116-28d85514ca2c-kube-api-access-6lrrn\") pod \"marketplace-operator-79b997595-5zvk7\" (UID: \"453d22cb-b151-4afd-8116-28d85514ca2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-5zvk7"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.622870 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/453d22cb-b151-4afd-8116-28d85514ca2c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5zvk7\" (UID: \"453d22cb-b151-4afd-8116-28d85514ca2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-5zvk7"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.734437 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lrrn\" (UniqueName: \"kubernetes.io/projected/453d22cb-b151-4afd-8116-28d85514ca2c-kube-api-access-6lrrn\") pod \"marketplace-operator-79b997595-5zvk7\" (UID: \"453d22cb-b151-4afd-8116-28d85514ca2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-5zvk7"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.734713 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/453d22cb-b151-4afd-8116-28d85514ca2c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5zvk7\" (UID: \"453d22cb-b151-4afd-8116-28d85514ca2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-5zvk7"
Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.734769 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/453d22cb-b151-4afd-8116-28d85514ca2c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5zvk7\" (UID: \"453d22cb-b151-4afd-8116-28d85514ca2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-5zvk7"
Nov 24
16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.738384 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/453d22cb-b151-4afd-8116-28d85514ca2c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5zvk7\" (UID: \"453d22cb-b151-4afd-8116-28d85514ca2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-5zvk7" Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.747985 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/453d22cb-b151-4afd-8116-28d85514ca2c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5zvk7\" (UID: \"453d22cb-b151-4afd-8116-28d85514ca2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-5zvk7" Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.753398 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lrrn\" (UniqueName: \"kubernetes.io/projected/453d22cb-b151-4afd-8116-28d85514ca2c-kube-api-access-6lrrn\") pod \"marketplace-operator-79b997595-5zvk7\" (UID: \"453d22cb-b151-4afd-8116-28d85514ca2c\") " pod="openshift-marketplace/marketplace-operator-79b997595-5zvk7" Nov 24 16:56:02 crc kubenswrapper[4768]: I1124 16:56:02.889791 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5zvk7" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.012776 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-llvqz" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.049059 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cw5r9" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.060089 4768 generic.go:334] "Generic (PLEG): container finished" podID="6393ad56-dadc-453f-b4f6-b7a6b52304e1" containerID="b842ad2a0550c3e6ff4623d31bdb892981a9ed84a024a19e256b0570542f11f7" exitCode=0 Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.060155 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" event={"ID":"6393ad56-dadc-453f-b4f6-b7a6b52304e1","Type":"ContainerDied","Data":"b842ad2a0550c3e6ff4623d31bdb892981a9ed84a024a19e256b0570542f11f7"} Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.091750 4768 generic.go:334] "Generic (PLEG): container finished" podID="069473c1-4cad-470f-a20e-2352a5bd6ff4" containerID="e80e4c33eb34f53e342addc8e366c1d8e8f605c6a81bd4f657ddb7ad4d31b94f" exitCode=0 Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.091842 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llvqz" event={"ID":"069473c1-4cad-470f-a20e-2352a5bd6ff4","Type":"ContainerDied","Data":"e80e4c33eb34f53e342addc8e366c1d8e8f605c6a81bd4f657ddb7ad4d31b94f"} Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.091884 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llvqz" event={"ID":"069473c1-4cad-470f-a20e-2352a5bd6ff4","Type":"ContainerDied","Data":"ba7ed9a29d70098c18b6d8465d1b21a76ddbcc857a11d5d144b7b599af163b4c"} Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.091887 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-llvqz" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.091906 4768 scope.go:117] "RemoveContainer" containerID="e80e4c33eb34f53e342addc8e366c1d8e8f605c6a81bd4f657ddb7ad4d31b94f" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.093596 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8vkl6" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.100144 4768 generic.go:334] "Generic (PLEG): container finished" podID="291b46fc-d3a5-457b-a85a-306f37d45ecc" containerID="42df1ea40f0e7d78dc0c228c106e219e7d162c5470ad277b8794de2ad45af8b2" exitCode=0 Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.100477 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cw5r9" event={"ID":"291b46fc-d3a5-457b-a85a-306f37d45ecc","Type":"ContainerDied","Data":"42df1ea40f0e7d78dc0c228c106e219e7d162c5470ad277b8794de2ad45af8b2"} Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.100628 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cw5r9" event={"ID":"291b46fc-d3a5-457b-a85a-306f37d45ecc","Type":"ContainerDied","Data":"4ce5fb87caf96bfad9a7d7d4bc00498490651429d06115783369ae79393c831b"} Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.101537 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cw5r9" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.113666 4768 scope.go:117] "RemoveContainer" containerID="111d51c88f89c94352489f41fd73bc4429c8a4154c2bfbb6bd056ff1d44ad0b3" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.118238 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.118561 4768 generic.go:334] "Generic (PLEG): container finished" podID="b2069e94-bdf1-4d31-9294-e19c0393e478" containerID="c355af5a7b6b921395031bb32a4a69ad96403990c7d746334f845b68281ede38" exitCode=0 Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.118626 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgh5z" event={"ID":"b2069e94-bdf1-4d31-9294-e19c0393e478","Type":"ContainerDied","Data":"c355af5a7b6b921395031bb32a4a69ad96403990c7d746334f845b68281ede38"} Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.156372 4768 generic.go:334] "Generic (PLEG): container finished" podID="ff262f7e-1ff3-47a7-8346-3e91a6d3583d" containerID="c5d875a2a103e4fbb759a158af7d474083706bc7d01034d074bf11919bdc9667" exitCode=0 Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.156424 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8vkl6" event={"ID":"ff262f7e-1ff3-47a7-8346-3e91a6d3583d","Type":"ContainerDied","Data":"c5d875a2a103e4fbb759a158af7d474083706bc7d01034d074bf11919bdc9667"} Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.156518 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55bsx\" (UniqueName: \"kubernetes.io/projected/069473c1-4cad-470f-a20e-2352a5bd6ff4-kube-api-access-55bsx\") pod \"069473c1-4cad-470f-a20e-2352a5bd6ff4\" (UID: \"069473c1-4cad-470f-a20e-2352a5bd6ff4\") " Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.156539 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8vkl6" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.156556 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/069473c1-4cad-470f-a20e-2352a5bd6ff4-utilities\") pod \"069473c1-4cad-470f-a20e-2352a5bd6ff4\" (UID: \"069473c1-4cad-470f-a20e-2352a5bd6ff4\") " Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.156612 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzbpm\" (UniqueName: \"kubernetes.io/projected/291b46fc-d3a5-457b-a85a-306f37d45ecc-kube-api-access-fzbpm\") pod \"291b46fc-d3a5-457b-a85a-306f37d45ecc\" (UID: \"291b46fc-d3a5-457b-a85a-306f37d45ecc\") " Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.156644 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/291b46fc-d3a5-457b-a85a-306f37d45ecc-utilities\") pod \"291b46fc-d3a5-457b-a85a-306f37d45ecc\" (UID: \"291b46fc-d3a5-457b-a85a-306f37d45ecc\") " Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.156781 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/069473c1-4cad-470f-a20e-2352a5bd6ff4-catalog-content\") pod \"069473c1-4cad-470f-a20e-2352a5bd6ff4\" (UID: \"069473c1-4cad-470f-a20e-2352a5bd6ff4\") " Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.156812 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/291b46fc-d3a5-457b-a85a-306f37d45ecc-catalog-content\") pod \"291b46fc-d3a5-457b-a85a-306f37d45ecc\" (UID: 
\"291b46fc-d3a5-457b-a85a-306f37d45ecc\") " Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.158433 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/291b46fc-d3a5-457b-a85a-306f37d45ecc-utilities" (OuterVolumeSpecName: "utilities") pod "291b46fc-d3a5-457b-a85a-306f37d45ecc" (UID: "291b46fc-d3a5-457b-a85a-306f37d45ecc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.158992 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/069473c1-4cad-470f-a20e-2352a5bd6ff4-utilities" (OuterVolumeSpecName: "utilities") pod "069473c1-4cad-470f-a20e-2352a5bd6ff4" (UID: "069473c1-4cad-470f-a20e-2352a5bd6ff4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.163135 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/069473c1-4cad-470f-a20e-2352a5bd6ff4-kube-api-access-55bsx" (OuterVolumeSpecName: "kube-api-access-55bsx") pod "069473c1-4cad-470f-a20e-2352a5bd6ff4" (UID: "069473c1-4cad-470f-a20e-2352a5bd6ff4"). InnerVolumeSpecName "kube-api-access-55bsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.163252 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/291b46fc-d3a5-457b-a85a-306f37d45ecc-kube-api-access-fzbpm" (OuterVolumeSpecName: "kube-api-access-fzbpm") pod "291b46fc-d3a5-457b-a85a-306f37d45ecc" (UID: "291b46fc-d3a5-457b-a85a-306f37d45ecc"). InnerVolumeSpecName "kube-api-access-fzbpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.169397 4768 scope.go:117] "RemoveContainer" containerID="e7c4ba1e14e759004d61c12b6dae8d05ad9a3eef160a68966fa8a3491448d29a" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.193129 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tgh5z" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.200873 4768 scope.go:117] "RemoveContainer" containerID="e80e4c33eb34f53e342addc8e366c1d8e8f605c6a81bd4f657ddb7ad4d31b94f" Nov 24 16:56:03 crc kubenswrapper[4768]: E1124 16:56:03.202382 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e80e4c33eb34f53e342addc8e366c1d8e8f605c6a81bd4f657ddb7ad4d31b94f\": container with ID starting with e80e4c33eb34f53e342addc8e366c1d8e8f605c6a81bd4f657ddb7ad4d31b94f not found: ID does not exist" containerID="e80e4c33eb34f53e342addc8e366c1d8e8f605c6a81bd4f657ddb7ad4d31b94f" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.202437 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e80e4c33eb34f53e342addc8e366c1d8e8f605c6a81bd4f657ddb7ad4d31b94f"} err="failed to get container status \"e80e4c33eb34f53e342addc8e366c1d8e8f605c6a81bd4f657ddb7ad4d31b94f\": rpc error: code = NotFound desc = could not find container \"e80e4c33eb34f53e342addc8e366c1d8e8f605c6a81bd4f657ddb7ad4d31b94f\": container with ID starting with e80e4c33eb34f53e342addc8e366c1d8e8f605c6a81bd4f657ddb7ad4d31b94f not found: ID does not exist" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.202482 4768 scope.go:117] "RemoveContainer" containerID="111d51c88f89c94352489f41fd73bc4429c8a4154c2bfbb6bd056ff1d44ad0b3" Nov 24 16:56:03 crc kubenswrapper[4768]: E1124 16:56:03.203282 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"111d51c88f89c94352489f41fd73bc4429c8a4154c2bfbb6bd056ff1d44ad0b3\": container with ID starting with 111d51c88f89c94352489f41fd73bc4429c8a4154c2bfbb6bd056ff1d44ad0b3 not found: ID does not exist" containerID="111d51c88f89c94352489f41fd73bc4429c8a4154c2bfbb6bd056ff1d44ad0b3" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.203306 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"111d51c88f89c94352489f41fd73bc4429c8a4154c2bfbb6bd056ff1d44ad0b3"} err="failed to get container status \"111d51c88f89c94352489f41fd73bc4429c8a4154c2bfbb6bd056ff1d44ad0b3\": rpc error: code = NotFound desc = could not find container \"111d51c88f89c94352489f41fd73bc4429c8a4154c2bfbb6bd056ff1d44ad0b3\": container with ID starting with 111d51c88f89c94352489f41fd73bc4429c8a4154c2bfbb6bd056ff1d44ad0b3 not found: ID does not exist" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.203322 4768 scope.go:117] "RemoveContainer" containerID="e7c4ba1e14e759004d61c12b6dae8d05ad9a3eef160a68966fa8a3491448d29a" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.203511 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5zvk7"] Nov 24 16:56:03 crc kubenswrapper[4768]: E1124 16:56:03.204040 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7c4ba1e14e759004d61c12b6dae8d05ad9a3eef160a68966fa8a3491448d29a\": container with ID starting with e7c4ba1e14e759004d61c12b6dae8d05ad9a3eef160a68966fa8a3491448d29a not found: ID does not exist" containerID="e7c4ba1e14e759004d61c12b6dae8d05ad9a3eef160a68966fa8a3491448d29a" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.204232 4768 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e7c4ba1e14e759004d61c12b6dae8d05ad9a3eef160a68966fa8a3491448d29a"} err="failed to get container status \"e7c4ba1e14e759004d61c12b6dae8d05ad9a3eef160a68966fa8a3491448d29a\": rpc error: code = NotFound desc = could not find container \"e7c4ba1e14e759004d61c12b6dae8d05ad9a3eef160a68966fa8a3491448d29a\": container with ID starting with e7c4ba1e14e759004d61c12b6dae8d05ad9a3eef160a68966fa8a3491448d29a not found: ID does not exist" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.204514 4768 scope.go:117] "RemoveContainer" containerID="42df1ea40f0e7d78dc0c228c106e219e7d162c5470ad277b8794de2ad45af8b2" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.223165 4768 scope.go:117] "RemoveContainer" containerID="0ad2ed6b73c5b8f533c169e9a27800bbdbae57fa9b79d1589407bf02b2f8d6c6" Nov 24 16:56:03 crc kubenswrapper[4768]: W1124 16:56:03.223185 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod453d22cb_b151_4afd_8116_28d85514ca2c.slice/crio-431593cd00308e1e90ea7acd181278599041ae365995835cbaf3b3194c326335 WatchSource:0}: Error finding container 431593cd00308e1e90ea7acd181278599041ae365995835cbaf3b3194c326335: Status 404 returned error can't find the container with id 431593cd00308e1e90ea7acd181278599041ae365995835cbaf3b3194c326335 Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.225686 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/069473c1-4cad-470f-a20e-2352a5bd6ff4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "069473c1-4cad-470f-a20e-2352a5bd6ff4" (UID: "069473c1-4cad-470f-a20e-2352a5bd6ff4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.254312 4768 scope.go:117] "RemoveContainer" containerID="450dbdb2c515d7519559c41cbbeca7e1c82eec5f4002876133297b3a7e016ccc" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.258456 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6khs6\" (UniqueName: \"kubernetes.io/projected/b2069e94-bdf1-4d31-9294-e19c0393e478-kube-api-access-6khs6\") pod \"b2069e94-bdf1-4d31-9294-e19c0393e478\" (UID: \"b2069e94-bdf1-4d31-9294-e19c0393e478\") " Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.258516 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2069e94-bdf1-4d31-9294-e19c0393e478-catalog-content\") pod \"b2069e94-bdf1-4d31-9294-e19c0393e478\" (UID: \"b2069e94-bdf1-4d31-9294-e19c0393e478\") " Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.258657 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6393ad56-dadc-453f-b4f6-b7a6b52304e1-marketplace-operator-metrics\") pod \"6393ad56-dadc-453f-b4f6-b7a6b52304e1\" (UID: \"6393ad56-dadc-453f-b4f6-b7a6b52304e1\") " Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.258738 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff262f7e-1ff3-47a7-8346-3e91a6d3583d-utilities\") pod \"ff262f7e-1ff3-47a7-8346-3e91a6d3583d\" (UID: \"ff262f7e-1ff3-47a7-8346-3e91a6d3583d\") " Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.258820 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2069e94-bdf1-4d31-9294-e19c0393e478-utilities\") pod \"b2069e94-bdf1-4d31-9294-e19c0393e478\" (UID: \"b2069e94-bdf1-4d31-9294-e19c0393e478\") " Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.258855 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9htb\" (UniqueName: \"kubernetes.io/projected/ff262f7e-1ff3-47a7-8346-3e91a6d3583d-kube-api-access-t9htb\") pod \"ff262f7e-1ff3-47a7-8346-3e91a6d3583d\" (UID: \"ff262f7e-1ff3-47a7-8346-3e91a6d3583d\") " Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.258903 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6393ad56-dadc-453f-b4f6-b7a6b52304e1-marketplace-trusted-ca\") pod \"6393ad56-dadc-453f-b4f6-b7a6b52304e1\" (UID: \"6393ad56-dadc-453f-b4f6-b7a6b52304e1\") " Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.258948 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j64x7\" (UniqueName: \"kubernetes.io/projected/6393ad56-dadc-453f-b4f6-b7a6b52304e1-kube-api-access-j64x7\") pod \"6393ad56-dadc-453f-b4f6-b7a6b52304e1\" (UID: \"6393ad56-dadc-453f-b4f6-b7a6b52304e1\") " Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.259094 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff262f7e-1ff3-47a7-8346-3e91a6d3583d-catalog-content\") pod \"ff262f7e-1ff3-47a7-8346-3e91a6d3583d\" (UID: \"ff262f7e-1ff3-47a7-8346-3e91a6d3583d\") " Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.259423 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/069473c1-4cad-470f-a20e-2352a5bd6ff4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.259946 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff262f7e-1ff3-47a7-8346-3e91a6d3583d-utilities" (OuterVolumeSpecName: "utilities") pod "ff262f7e-1ff3-47a7-8346-3e91a6d3583d" (UID: "ff262f7e-1ff3-47a7-8346-3e91a6d3583d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.260340 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6393ad56-dadc-453f-b4f6-b7a6b52304e1-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "6393ad56-dadc-453f-b4f6-b7a6b52304e1" (UID: "6393ad56-dadc-453f-b4f6-b7a6b52304e1"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.262318 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2069e94-bdf1-4d31-9294-e19c0393e478-utilities" (OuterVolumeSpecName: "utilities") pod "b2069e94-bdf1-4d31-9294-e19c0393e478" (UID: "b2069e94-bdf1-4d31-9294-e19c0393e478"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.262639 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2069e94-bdf1-4d31-9294-e19c0393e478-kube-api-access-6khs6" (OuterVolumeSpecName: "kube-api-access-6khs6") pod "b2069e94-bdf1-4d31-9294-e19c0393e478" (UID: "b2069e94-bdf1-4d31-9294-e19c0393e478"). InnerVolumeSpecName "kube-api-access-6khs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.262852 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55bsx\" (UniqueName: \"kubernetes.io/projected/069473c1-4cad-470f-a20e-2352a5bd6ff4-kube-api-access-55bsx\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.262885 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/069473c1-4cad-470f-a20e-2352a5bd6ff4-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.262943 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzbpm\" (UniqueName: \"kubernetes.io/projected/291b46fc-d3a5-457b-a85a-306f37d45ecc-kube-api-access-fzbpm\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.263109 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/291b46fc-d3a5-457b-a85a-306f37d45ecc-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.264118 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6393ad56-dadc-453f-b4f6-b7a6b52304e1-kube-api-access-j64x7" (OuterVolumeSpecName: "kube-api-access-j64x7") pod "6393ad56-dadc-453f-b4f6-b7a6b52304e1" (UID: "6393ad56-dadc-453f-b4f6-b7a6b52304e1"). InnerVolumeSpecName "kube-api-access-j64x7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.267074 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/291b46fc-d3a5-457b-a85a-306f37d45ecc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "291b46fc-d3a5-457b-a85a-306f37d45ecc" (UID: "291b46fc-d3a5-457b-a85a-306f37d45ecc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.275449 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6393ad56-dadc-453f-b4f6-b7a6b52304e1-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "6393ad56-dadc-453f-b4f6-b7a6b52304e1" (UID: "6393ad56-dadc-453f-b4f6-b7a6b52304e1"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.275808 4768 scope.go:117] "RemoveContainer" containerID="42df1ea40f0e7d78dc0c228c106e219e7d162c5470ad277b8794de2ad45af8b2" Nov 24 16:56:03 crc kubenswrapper[4768]: E1124 16:56:03.276242 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42df1ea40f0e7d78dc0c228c106e219e7d162c5470ad277b8794de2ad45af8b2\": container with ID starting with 42df1ea40f0e7d78dc0c228c106e219e7d162c5470ad277b8794de2ad45af8b2 not found: ID does not exist" containerID="42df1ea40f0e7d78dc0c228c106e219e7d162c5470ad277b8794de2ad45af8b2" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.276296 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42df1ea40f0e7d78dc0c228c106e219e7d162c5470ad277b8794de2ad45af8b2"} err="failed to get container status \"42df1ea40f0e7d78dc0c228c106e219e7d162c5470ad277b8794de2ad45af8b2\": rpc error: code = NotFound desc = could not find container \"42df1ea40f0e7d78dc0c228c106e219e7d162c5470ad277b8794de2ad45af8b2\": container with ID starting with 42df1ea40f0e7d78dc0c228c106e219e7d162c5470ad277b8794de2ad45af8b2 not found: ID does not exist" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.276324 4768 scope.go:117] "RemoveContainer" containerID="0ad2ed6b73c5b8f533c169e9a27800bbdbae57fa9b79d1589407bf02b2f8d6c6" Nov 24 16:56:03 crc kubenswrapper[4768]: E1124 16:56:03.276906 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ad2ed6b73c5b8f533c169e9a27800bbdbae57fa9b79d1589407bf02b2f8d6c6\": container with ID starting with 0ad2ed6b73c5b8f533c169e9a27800bbdbae57fa9b79d1589407bf02b2f8d6c6 not found: ID does not exist" containerID="0ad2ed6b73c5b8f533c169e9a27800bbdbae57fa9b79d1589407bf02b2f8d6c6" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.276932 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ad2ed6b73c5b8f533c169e9a27800bbdbae57fa9b79d1589407bf02b2f8d6c6"} err="failed to get container status \"0ad2ed6b73c5b8f533c169e9a27800bbdbae57fa9b79d1589407bf02b2f8d6c6\": rpc error: code = NotFound desc = could not find container \"0ad2ed6b73c5b8f533c169e9a27800bbdbae57fa9b79d1589407bf02b2f8d6c6\": container with ID starting with 0ad2ed6b73c5b8f533c169e9a27800bbdbae57fa9b79d1589407bf02b2f8d6c6 not found: ID does not exist" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.276948 4768 scope.go:117] "RemoveContainer" containerID="450dbdb2c515d7519559c41cbbeca7e1c82eec5f4002876133297b3a7e016ccc" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.277661 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff262f7e-1ff3-47a7-8346-3e91a6d3583d-kube-api-access-t9htb" (OuterVolumeSpecName: "kube-api-access-t9htb") pod "ff262f7e-1ff3-47a7-8346-3e91a6d3583d" (UID: "ff262f7e-1ff3-47a7-8346-3e91a6d3583d"). InnerVolumeSpecName "kube-api-access-t9htb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:56:03 crc kubenswrapper[4768]: E1124 16:56:03.278137 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"450dbdb2c515d7519559c41cbbeca7e1c82eec5f4002876133297b3a7e016ccc\": container with ID starting with 450dbdb2c515d7519559c41cbbeca7e1c82eec5f4002876133297b3a7e016ccc not found: ID does not exist" containerID="450dbdb2c515d7519559c41cbbeca7e1c82eec5f4002876133297b3a7e016ccc" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.278192 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"450dbdb2c515d7519559c41cbbeca7e1c82eec5f4002876133297b3a7e016ccc"} err="failed to get container status \"450dbdb2c515d7519559c41cbbeca7e1c82eec5f4002876133297b3a7e016ccc\": rpc error: code = NotFound desc = could not find container \"450dbdb2c515d7519559c41cbbeca7e1c82eec5f4002876133297b3a7e016ccc\": container with ID starting with 450dbdb2c515d7519559c41cbbeca7e1c82eec5f4002876133297b3a7e016ccc not found: ID does not exist" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.278227 4768 scope.go:117] "RemoveContainer" containerID="c5d875a2a103e4fbb759a158af7d474083706bc7d01034d074bf11919bdc9667" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.280810 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2069e94-bdf1-4d31-9294-e19c0393e478-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2069e94-bdf1-4d31-9294-e19c0393e478" (UID: "b2069e94-bdf1-4d31-9294-e19c0393e478"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.297667 4768 scope.go:117] "RemoveContainer" containerID="43567c6694b0b048e225d1cf90e59508912913f31cf1a07ce6894966414a1709" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.334827 4768 scope.go:117] "RemoveContainer" containerID="de8a9240181b92fe68cbd33911314a26578dc256a83a3bead1dde78c12dffcd5" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.352212 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff262f7e-1ff3-47a7-8346-3e91a6d3583d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff262f7e-1ff3-47a7-8346-3e91a6d3583d" (UID: "ff262f7e-1ff3-47a7-8346-3e91a6d3583d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.363987 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/291b46fc-d3a5-457b-a85a-306f37d45ecc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.364073 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6khs6\" (UniqueName: \"kubernetes.io/projected/b2069e94-bdf1-4d31-9294-e19c0393e478-kube-api-access-6khs6\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.364091 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2069e94-bdf1-4d31-9294-e19c0393e478-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.364101 4768 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6393ad56-dadc-453f-b4f6-b7a6b52304e1-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.364113 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff262f7e-1ff3-47a7-8346-3e91a6d3583d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.364121 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2069e94-bdf1-4d31-9294-e19c0393e478-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.364130 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9htb\" (UniqueName: \"kubernetes.io/projected/ff262f7e-1ff3-47a7-8346-3e91a6d3583d-kube-api-access-t9htb\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.364139 4768 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6393ad56-dadc-453f-b4f6-b7a6b52304e1-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.364146 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j64x7\" (UniqueName: \"kubernetes.io/projected/6393ad56-dadc-453f-b4f6-b7a6b52304e1-kube-api-access-j64x7\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.364155 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff262f7e-1ff3-47a7-8346-3e91a6d3583d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.427839 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cw5r9"] Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.432746 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cw5r9"] Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.443118 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-llvqz"] Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.446070 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-llvqz"] Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 
16:56:03.481289 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8vkl6"] Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.489975 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8vkl6"] Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.589904 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="069473c1-4cad-470f-a20e-2352a5bd6ff4" path="/var/lib/kubelet/pods/069473c1-4cad-470f-a20e-2352a5bd6ff4/volumes" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.590857 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="291b46fc-d3a5-457b-a85a-306f37d45ecc" path="/var/lib/kubelet/pods/291b46fc-d3a5-457b-a85a-306f37d45ecc/volumes" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.591718 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff262f7e-1ff3-47a7-8346-3e91a6d3583d" path="/var/lib/kubelet/pods/ff262f7e-1ff3-47a7-8346-3e91a6d3583d/volumes" Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.919317 4768 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-rhk4d container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 16:56:03 crc kubenswrapper[4768]: I1124 16:56:03.919669 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" podUID="6393ad56-dadc-453f-b4f6-b7a6b52304e1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.165873 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tgh5z" event={"ID":"b2069e94-bdf1-4d31-9294-e19c0393e478","Type":"ContainerDied","Data":"01839dedb38bf58e3060a0b21f375a4be7388f461b9710d0605003935a2693dc"} Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.165924 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tgh5z" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.165938 4768 scope.go:117] "RemoveContainer" containerID="c355af5a7b6b921395031bb32a4a69ad96403990c7d746334f845b68281ede38" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.168031 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5zvk7" event={"ID":"453d22cb-b151-4afd-8116-28d85514ca2c","Type":"ContainerStarted","Data":"0f8c61c2ca61f87ff7a7d866f7d4a8169e2bba8f07e010f658eac260c969259e"} Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.168063 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5zvk7" event={"ID":"453d22cb-b151-4afd-8116-28d85514ca2c","Type":"ContainerStarted","Data":"431593cd00308e1e90ea7acd181278599041ae365995835cbaf3b3194c326335"} Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.168299 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5zvk7" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.175827 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5zvk7" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.177528 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" event={"ID":"6393ad56-dadc-453f-b4f6-b7a6b52304e1","Type":"ContainerDied","Data":"defafbe864f92768e2fdf900b52602860bb7dd9a01406cae9d63fec0e9359c3c"} Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.177696 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rhk4d" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.185856 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgh5z"] Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.191565 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tgh5z"] Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.192913 4768 scope.go:117] "RemoveContainer" containerID="27685d0747c7dd483022545f1ad8cb40d18a69229dc57dcac287b8852c7c3f88" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.213517 4768 scope.go:117] "RemoveContainer" containerID="50ceec3c95a09825bec9edd60974fe2c7c87b5c357cdbc9fe8311c82a29b9e61" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.216698 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-5zvk7" podStartSLOduration=2.216687124 podStartE2EDuration="2.216687124s" podCreationTimestamp="2025-11-24 16:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:56:04.215108644 +0000 UTC m=+245.462077322" watchObservedRunningTime="2025-11-24 16:56:04.216687124 +0000 UTC m=+245.463655782" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.236419 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rhk4d"] Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.238989 4768 scope.go:117] "RemoveContainer" containerID="b842ad2a0550c3e6ff4623d31bdb892981a9ed84a024a19e256b0570542f11f7" Nov 24 16:56:04 crc 
kubenswrapper[4768]: I1124 16:56:04.248597 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rhk4d"] Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.780588 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dsv6c"] Nov 24 16:56:04 crc kubenswrapper[4768]: E1124 16:56:04.780787 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="069473c1-4cad-470f-a20e-2352a5bd6ff4" containerName="extract-content" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.780799 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="069473c1-4cad-470f-a20e-2352a5bd6ff4" containerName="extract-content" Nov 24 16:56:04 crc kubenswrapper[4768]: E1124 16:56:04.780810 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="291b46fc-d3a5-457b-a85a-306f37d45ecc" containerName="registry-server" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.780816 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="291b46fc-d3a5-457b-a85a-306f37d45ecc" containerName="registry-server" Nov 24 16:56:04 crc kubenswrapper[4768]: E1124 16:56:04.780826 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6393ad56-dadc-453f-b4f6-b7a6b52304e1" containerName="marketplace-operator" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.780832 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6393ad56-dadc-453f-b4f6-b7a6b52304e1" containerName="marketplace-operator" Nov 24 16:56:04 crc kubenswrapper[4768]: E1124 16:56:04.780841 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2069e94-bdf1-4d31-9294-e19c0393e478" containerName="extract-content" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.780846 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2069e94-bdf1-4d31-9294-e19c0393e478" containerName="extract-content" Nov 24 16:56:04 crc kubenswrapper[4768]: E1124 16:56:04.780855 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="069473c1-4cad-470f-a20e-2352a5bd6ff4" containerName="registry-server" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.780860 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="069473c1-4cad-470f-a20e-2352a5bd6ff4" containerName="registry-server" Nov 24 16:56:04 crc kubenswrapper[4768]: E1124 16:56:04.780870 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="069473c1-4cad-470f-a20e-2352a5bd6ff4" containerName="extract-utilities" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.780876 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="069473c1-4cad-470f-a20e-2352a5bd6ff4" containerName="extract-utilities" Nov 24 16:56:04 crc kubenswrapper[4768]: E1124 16:56:04.780882 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2069e94-bdf1-4d31-9294-e19c0393e478" containerName="extract-utilities" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.780887 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2069e94-bdf1-4d31-9294-e19c0393e478" containerName="extract-utilities" Nov 24 16:56:04 crc kubenswrapper[4768]: E1124 16:56:04.780894 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff262f7e-1ff3-47a7-8346-3e91a6d3583d" containerName="registry-server" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.780900 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff262f7e-1ff3-47a7-8346-3e91a6d3583d" containerName="registry-server" Nov 24 16:56:04 crc 
kubenswrapper[4768]: E1124 16:56:04.780909 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2069e94-bdf1-4d31-9294-e19c0393e478" containerName="registry-server" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.780914 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2069e94-bdf1-4d31-9294-e19c0393e478" containerName="registry-server" Nov 24 16:56:04 crc kubenswrapper[4768]: E1124 16:56:04.780922 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="291b46fc-d3a5-457b-a85a-306f37d45ecc" containerName="extract-utilities" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.780930 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="291b46fc-d3a5-457b-a85a-306f37d45ecc" containerName="extract-utilities" Nov 24 16:56:04 crc kubenswrapper[4768]: E1124 16:56:04.780935 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff262f7e-1ff3-47a7-8346-3e91a6d3583d" containerName="extract-utilities" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.780941 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff262f7e-1ff3-47a7-8346-3e91a6d3583d" containerName="extract-utilities" Nov 24 16:56:04 crc kubenswrapper[4768]: E1124 16:56:04.780949 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="291b46fc-d3a5-457b-a85a-306f37d45ecc" containerName="extract-content" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.780954 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="291b46fc-d3a5-457b-a85a-306f37d45ecc" containerName="extract-content" Nov 24 16:56:04 crc kubenswrapper[4768]: E1124 16:56:04.780961 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff262f7e-1ff3-47a7-8346-3e91a6d3583d" containerName="extract-content" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.780966 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff262f7e-1ff3-47a7-8346-3e91a6d3583d" containerName="extract-content" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.781047 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff262f7e-1ff3-47a7-8346-3e91a6d3583d" containerName="registry-server" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.781059 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2069e94-bdf1-4d31-9294-e19c0393e478" containerName="registry-server" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.781067 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="069473c1-4cad-470f-a20e-2352a5bd6ff4" containerName="registry-server" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.781076 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6393ad56-dadc-453f-b4f6-b7a6b52304e1" containerName="marketplace-operator" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.781082 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="291b46fc-d3a5-457b-a85a-306f37d45ecc" containerName="registry-server" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.781759 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dsv6c" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.784155 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.803144 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dsv6c"] Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.884665 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41aba52e-e435-4061-88d5-30b6d8b78806-utilities\") pod \"community-operators-dsv6c\" (UID: \"41aba52e-e435-4061-88d5-30b6d8b78806\") " pod="openshift-marketplace/community-operators-dsv6c" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.884743 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzhl9\" (UniqueName: \"kubernetes.io/projected/41aba52e-e435-4061-88d5-30b6d8b78806-kube-api-access-fzhl9\") pod \"community-operators-dsv6c\" (UID: \"41aba52e-e435-4061-88d5-30b6d8b78806\") " pod="openshift-marketplace/community-operators-dsv6c" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.884765 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41aba52e-e435-4061-88d5-30b6d8b78806-catalog-content\") pod \"community-operators-dsv6c\" (UID: \"41aba52e-e435-4061-88d5-30b6d8b78806\") " pod="openshift-marketplace/community-operators-dsv6c" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.980509 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-47fqh"] Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.981412 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-47fqh" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.983780 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.985562 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzhl9\" (UniqueName: \"kubernetes.io/projected/41aba52e-e435-4061-88d5-30b6d8b78806-kube-api-access-fzhl9\") pod \"community-operators-dsv6c\" (UID: \"41aba52e-e435-4061-88d5-30b6d8b78806\") " pod="openshift-marketplace/community-operators-dsv6c" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.985589 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41aba52e-e435-4061-88d5-30b6d8b78806-catalog-content\") pod \"community-operators-dsv6c\" (UID: \"41aba52e-e435-4061-88d5-30b6d8b78806\") " pod="openshift-marketplace/community-operators-dsv6c" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.985636 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41aba52e-e435-4061-88d5-30b6d8b78806-utilities\") pod \"community-operators-dsv6c\" (UID: \"41aba52e-e435-4061-88d5-30b6d8b78806\") " pod="openshift-marketplace/community-operators-dsv6c" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.986473 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41aba52e-e435-4061-88d5-30b6d8b78806-utilities\") pod \"community-operators-dsv6c\" (UID: \"41aba52e-e435-4061-88d5-30b6d8b78806\") " pod="openshift-marketplace/community-operators-dsv6c" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.986652 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41aba52e-e435-4061-88d5-30b6d8b78806-catalog-content\") pod \"community-operators-dsv6c\" (UID: \"41aba52e-e435-4061-88d5-30b6d8b78806\") " pod="openshift-marketplace/community-operators-dsv6c" Nov 24 16:56:04 crc kubenswrapper[4768]: I1124 16:56:04.992766 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-47fqh"] Nov 24 16:56:05 crc kubenswrapper[4768]: I1124 16:56:05.006244 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzhl9\" (UniqueName: \"kubernetes.io/projected/41aba52e-e435-4061-88d5-30b6d8b78806-kube-api-access-fzhl9\") pod \"community-operators-dsv6c\" (UID: \"41aba52e-e435-4061-88d5-30b6d8b78806\") " pod="openshift-marketplace/community-operators-dsv6c" Nov 24 16:56:05 crc kubenswrapper[4768]: I1124 16:56:05.086676 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e71cc43-12fc-4315-992f-af825fe58680-utilities\") pod \"redhat-marketplace-47fqh\" (UID: \"9e71cc43-12fc-4315-992f-af825fe58680\") " pod="openshift-marketplace/redhat-marketplace-47fqh" Nov 24 16:56:05 crc kubenswrapper[4768]: I1124 16:56:05.087063 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzmwb\" (UniqueName: \"kubernetes.io/projected/9e71cc43-12fc-4315-992f-af825fe58680-kube-api-access-bzmwb\") pod \"redhat-marketplace-47fqh\" (UID: 
\"9e71cc43-12fc-4315-992f-af825fe58680\") " pod="openshift-marketplace/redhat-marketplace-47fqh" Nov 24 16:56:05 crc kubenswrapper[4768]: I1124 16:56:05.087107 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e71cc43-12fc-4315-992f-af825fe58680-catalog-content\") pod \"redhat-marketplace-47fqh\" (UID: \"9e71cc43-12fc-4315-992f-af825fe58680\") " pod="openshift-marketplace/redhat-marketplace-47fqh" Nov 24 16:56:05 crc kubenswrapper[4768]: I1124 16:56:05.096701 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dsv6c" Nov 24 16:56:05 crc kubenswrapper[4768]: I1124 16:56:05.188621 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e71cc43-12fc-4315-992f-af825fe58680-catalog-content\") pod \"redhat-marketplace-47fqh\" (UID: \"9e71cc43-12fc-4315-992f-af825fe58680\") " pod="openshift-marketplace/redhat-marketplace-47fqh" Nov 24 16:56:05 crc kubenswrapper[4768]: I1124 16:56:05.188722 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e71cc43-12fc-4315-992f-af825fe58680-utilities\") pod \"redhat-marketplace-47fqh\" (UID: \"9e71cc43-12fc-4315-992f-af825fe58680\") " pod="openshift-marketplace/redhat-marketplace-47fqh" Nov 24 16:56:05 crc kubenswrapper[4768]: I1124 16:56:05.188771 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzmwb\" (UniqueName: \"kubernetes.io/projected/9e71cc43-12fc-4315-992f-af825fe58680-kube-api-access-bzmwb\") pod \"redhat-marketplace-47fqh\" (UID: \"9e71cc43-12fc-4315-992f-af825fe58680\") " pod="openshift-marketplace/redhat-marketplace-47fqh" Nov 24 16:56:05 crc kubenswrapper[4768]: I1124 16:56:05.189793 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e71cc43-12fc-4315-992f-af825fe58680-catalog-content\") pod \"redhat-marketplace-47fqh\" (UID: \"9e71cc43-12fc-4315-992f-af825fe58680\") " pod="openshift-marketplace/redhat-marketplace-47fqh" Nov 24 16:56:05 crc kubenswrapper[4768]: I1124 16:56:05.190061 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e71cc43-12fc-4315-992f-af825fe58680-utilities\") pod \"redhat-marketplace-47fqh\" (UID: \"9e71cc43-12fc-4315-992f-af825fe58680\") " pod="openshift-marketplace/redhat-marketplace-47fqh" Nov 24 16:56:05 crc kubenswrapper[4768]: I1124 16:56:05.210556 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzmwb\" (UniqueName: \"kubernetes.io/projected/9e71cc43-12fc-4315-992f-af825fe58680-kube-api-access-bzmwb\") pod \"redhat-marketplace-47fqh\" (UID: \"9e71cc43-12fc-4315-992f-af825fe58680\") " pod="openshift-marketplace/redhat-marketplace-47fqh" Nov 24 16:56:05 crc kubenswrapper[4768]: I1124 16:56:05.296119 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-47fqh" Nov 24 16:56:05 crc kubenswrapper[4768]: I1124 16:56:05.502791 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dsv6c"] Nov 24 16:56:05 crc kubenswrapper[4768]: W1124 16:56:05.512022 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41aba52e_e435_4061_88d5_30b6d8b78806.slice/crio-dfb92b12763712b3dfbd80a4500faea200f0831d340b681442693a88f714f2fe WatchSource:0}: Error finding container dfb92b12763712b3dfbd80a4500faea200f0831d340b681442693a88f714f2fe: Status 404 returned error can't find the container with id dfb92b12763712b3dfbd80a4500faea200f0831d340b681442693a88f714f2fe Nov 24 16:56:05 crc kubenswrapper[4768]: I1124 16:56:05.595970 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6393ad56-dadc-453f-b4f6-b7a6b52304e1" path="/var/lib/kubelet/pods/6393ad56-dadc-453f-b4f6-b7a6b52304e1/volumes" Nov 24 16:56:05 crc kubenswrapper[4768]: I1124 16:56:05.596578 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2069e94-bdf1-4d31-9294-e19c0393e478" path="/var/lib/kubelet/pods/b2069e94-bdf1-4d31-9294-e19c0393e478/volumes" Nov 24 16:56:05 crc kubenswrapper[4768]: I1124 16:56:05.677834 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-47fqh"] Nov 24 16:56:05 crc kubenswrapper[4768]: W1124 16:56:05.748129 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e71cc43_12fc_4315_992f_af825fe58680.slice/crio-12acb3ac019960f3e388b28bae7194335c5a95168187fb839ef3d4420185eab8 WatchSource:0}: Error finding container 12acb3ac019960f3e388b28bae7194335c5a95168187fb839ef3d4420185eab8: Status 404 returned error can't find the container with id 12acb3ac019960f3e388b28bae7194335c5a95168187fb839ef3d4420185eab8 Nov 24 16:56:06 crc kubenswrapper[4768]: I1124 16:56:06.214708 4768 generic.go:334] "Generic (PLEG): container finished" podID="41aba52e-e435-4061-88d5-30b6d8b78806" containerID="604d0ed17242f5eec4382e2101dbb358ecaa9c567acb9c566fe83a7eac8114db" exitCode=0 Nov 24 16:56:06 crc kubenswrapper[4768]: I1124 16:56:06.214767 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsv6c" event={"ID":"41aba52e-e435-4061-88d5-30b6d8b78806","Type":"ContainerDied","Data":"604d0ed17242f5eec4382e2101dbb358ecaa9c567acb9c566fe83a7eac8114db"} Nov 24 16:56:06 crc kubenswrapper[4768]: I1124 16:56:06.214835 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsv6c" event={"ID":"41aba52e-e435-4061-88d5-30b6d8b78806","Type":"ContainerStarted","Data":"dfb92b12763712b3dfbd80a4500faea200f0831d340b681442693a88f714f2fe"} Nov 24 16:56:06 crc kubenswrapper[4768]: I1124 16:56:06.216867 4768 generic.go:334] "Generic (PLEG): container finished" podID="9e71cc43-12fc-4315-992f-af825fe58680" containerID="6ec9f506f9b957a5c4867469fa8aeb0d2552affdce5fa864f20330d00296c1b6" exitCode=0 Nov 24 16:56:06 crc kubenswrapper[4768]: I1124 16:56:06.216952 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47fqh" event={"ID":"9e71cc43-12fc-4315-992f-af825fe58680","Type":"ContainerDied","Data":"6ec9f506f9b957a5c4867469fa8aeb0d2552affdce5fa864f20330d00296c1b6"} Nov 24 16:56:06 crc kubenswrapper[4768]: I1124 16:56:06.216975 4768 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47fqh" event={"ID":"9e71cc43-12fc-4315-992f-af825fe58680","Type":"ContainerStarted","Data":"12acb3ac019960f3e388b28bae7194335c5a95168187fb839ef3d4420185eab8"} Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.190027 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4qjbs"] Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.191509 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4qjbs" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.194735 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.201281 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4qjbs"] Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.225569 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsv6c" event={"ID":"41aba52e-e435-4061-88d5-30b6d8b78806","Type":"ContainerStarted","Data":"3f323426faf2a596a3b11350d94bc555a2d44d26f59ec1a29e33287d76acca12"} Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.325957 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf97p\" (UniqueName: \"kubernetes.io/projected/3ec76654-6209-40eb-85dc-861ddae3c79f-kube-api-access-jf97p\") pod \"redhat-operators-4qjbs\" (UID: \"3ec76654-6209-40eb-85dc-861ddae3c79f\") " pod="openshift-marketplace/redhat-operators-4qjbs" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.326755 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ec76654-6209-40eb-85dc-861ddae3c79f-catalog-content\") pod \"redhat-operators-4qjbs\" (UID: \"3ec76654-6209-40eb-85dc-861ddae3c79f\") " pod="openshift-marketplace/redhat-operators-4qjbs" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.326809 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ec76654-6209-40eb-85dc-861ddae3c79f-utilities\") pod \"redhat-operators-4qjbs\" (UID: \"3ec76654-6209-40eb-85dc-861ddae3c79f\") " pod="openshift-marketplace/redhat-operators-4qjbs" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.392021 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fz5jq"] Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.393025 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fz5jq" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.399475 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.404295 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fz5jq"] Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.428612 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ec76654-6209-40eb-85dc-861ddae3c79f-catalog-content\") pod \"redhat-operators-4qjbs\" (UID: \"3ec76654-6209-40eb-85dc-861ddae3c79f\") " pod="openshift-marketplace/redhat-operators-4qjbs" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.428733 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ec76654-6209-40eb-85dc-861ddae3c79f-utilities\") pod \"redhat-operators-4qjbs\" (UID: \"3ec76654-6209-40eb-85dc-861ddae3c79f\") " pod="openshift-marketplace/redhat-operators-4qjbs" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.428910 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf97p\" (UniqueName: \"kubernetes.io/projected/3ec76654-6209-40eb-85dc-861ddae3c79f-kube-api-access-jf97p\") pod \"redhat-operators-4qjbs\" (UID: \"3ec76654-6209-40eb-85dc-861ddae3c79f\") " pod="openshift-marketplace/redhat-operators-4qjbs" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.429137 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ec76654-6209-40eb-85dc-861ddae3c79f-catalog-content\") pod \"redhat-operators-4qjbs\" (UID: \"3ec76654-6209-40eb-85dc-861ddae3c79f\") " pod="openshift-marketplace/redhat-operators-4qjbs" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.429749 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ec76654-6209-40eb-85dc-861ddae3c79f-utilities\") pod \"redhat-operators-4qjbs\" (UID: \"3ec76654-6209-40eb-85dc-861ddae3c79f\") " pod="openshift-marketplace/redhat-operators-4qjbs" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.455882 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf97p\" (UniqueName: \"kubernetes.io/projected/3ec76654-6209-40eb-85dc-861ddae3c79f-kube-api-access-jf97p\") pod \"redhat-operators-4qjbs\" (UID: \"3ec76654-6209-40eb-85dc-861ddae3c79f\") " pod="openshift-marketplace/redhat-operators-4qjbs" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.530541 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eca00397-85e6-401b-b0a8-011a3307b0ee-utilities\") pod \"certified-operators-fz5jq\" (UID: \"eca00397-85e6-401b-b0a8-011a3307b0ee\") " pod="openshift-marketplace/certified-operators-fz5jq" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.530726 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eca00397-85e6-401b-b0a8-011a3307b0ee-catalog-content\") pod \"certified-operators-fz5jq\" (UID: \"eca00397-85e6-401b-b0a8-011a3307b0ee\") " 
pod="openshift-marketplace/certified-operators-fz5jq" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.530822 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rh9w\" (UniqueName: \"kubernetes.io/projected/eca00397-85e6-401b-b0a8-011a3307b0ee-kube-api-access-2rh9w\") pod \"certified-operators-fz5jq\" (UID: \"eca00397-85e6-401b-b0a8-011a3307b0ee\") " pod="openshift-marketplace/certified-operators-fz5jq" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.588217 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4qjbs" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.631929 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eca00397-85e6-401b-b0a8-011a3307b0ee-utilities\") pod \"certified-operators-fz5jq\" (UID: \"eca00397-85e6-401b-b0a8-011a3307b0ee\") " pod="openshift-marketplace/certified-operators-fz5jq" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.631988 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eca00397-85e6-401b-b0a8-011a3307b0ee-catalog-content\") pod \"certified-operators-fz5jq\" (UID: \"eca00397-85e6-401b-b0a8-011a3307b0ee\") " pod="openshift-marketplace/certified-operators-fz5jq" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.632024 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rh9w\" (UniqueName: \"kubernetes.io/projected/eca00397-85e6-401b-b0a8-011a3307b0ee-kube-api-access-2rh9w\") pod \"certified-operators-fz5jq\" (UID: \"eca00397-85e6-401b-b0a8-011a3307b0ee\") " pod="openshift-marketplace/certified-operators-fz5jq" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.633768 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eca00397-85e6-401b-b0a8-011a3307b0ee-utilities\") pod \"certified-operators-fz5jq\" (UID: \"eca00397-85e6-401b-b0a8-011a3307b0ee\") " pod="openshift-marketplace/certified-operators-fz5jq" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.633985 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eca00397-85e6-401b-b0a8-011a3307b0ee-catalog-content\") pod \"certified-operators-fz5jq\" (UID: \"eca00397-85e6-401b-b0a8-011a3307b0ee\") " pod="openshift-marketplace/certified-operators-fz5jq" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.662046 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rh9w\" (UniqueName: \"kubernetes.io/projected/eca00397-85e6-401b-b0a8-011a3307b0ee-kube-api-access-2rh9w\") pod \"certified-operators-fz5jq\" (UID: \"eca00397-85e6-401b-b0a8-011a3307b0ee\") " pod="openshift-marketplace/certified-operators-fz5jq" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.749190 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fz5jq" Nov 24 16:56:07 crc kubenswrapper[4768]: I1124 16:56:07.960933 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fz5jq"] Nov 24 16:56:07 crc kubenswrapper[4768]: W1124 16:56:07.970865 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeca00397_85e6_401b_b0a8_011a3307b0ee.slice/crio-785f26ec49524e967b3f2781c422e21117f7bc8feeec45233a5e1990d24b39d6 WatchSource:0}: Error finding container 785f26ec49524e967b3f2781c422e21117f7bc8feeec45233a5e1990d24b39d6: Status 404 returned error can't find the container with id 785f26ec49524e967b3f2781c422e21117f7bc8feeec45233a5e1990d24b39d6 Nov 24 16:56:08 crc kubenswrapper[4768]: I1124 16:56:08.011191 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4qjbs"] Nov 24 16:56:08 crc kubenswrapper[4768]: W1124 16:56:08.019568 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ec76654_6209_40eb_85dc_861ddae3c79f.slice/crio-29cdb17aa54fcb4205cebc972af6727734dc76ed921ba328f91d8a22c3de87d5 WatchSource:0}: Error finding container 29cdb17aa54fcb4205cebc972af6727734dc76ed921ba328f91d8a22c3de87d5: Status 404 returned error can't find the container with id 29cdb17aa54fcb4205cebc972af6727734dc76ed921ba328f91d8a22c3de87d5 Nov 24 16:56:08 crc kubenswrapper[4768]: I1124 16:56:08.233143 4768 generic.go:334] "Generic (PLEG): container finished" podID="3ec76654-6209-40eb-85dc-861ddae3c79f" containerID="98cd7084a244b3fdbce61dfa0acfbb8b4fe82b19089fc22d87e8f2a1228ee52b" exitCode=0 Nov 24 16:56:08 crc kubenswrapper[4768]: I1124 16:56:08.233255 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qjbs" event={"ID":"3ec76654-6209-40eb-85dc-861ddae3c79f","Type":"ContainerDied","Data":"98cd7084a244b3fdbce61dfa0acfbb8b4fe82b19089fc22d87e8f2a1228ee52b"} Nov 24 16:56:08 crc kubenswrapper[4768]: I1124 16:56:08.233607 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qjbs" event={"ID":"3ec76654-6209-40eb-85dc-861ddae3c79f","Type":"ContainerStarted","Data":"29cdb17aa54fcb4205cebc972af6727734dc76ed921ba328f91d8a22c3de87d5"} Nov 24 16:56:08 crc kubenswrapper[4768]: I1124 16:56:08.235558 4768 generic.go:334] "Generic (PLEG): container finished" podID="eca00397-85e6-401b-b0a8-011a3307b0ee" containerID="b5a4ea3ff8abe8ddb8098ddc050853369ea5eb6a6292e54a9963d81831f3126b" exitCode=0 Nov 24 16:56:08 crc kubenswrapper[4768]: I1124 16:56:08.235626 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fz5jq" event={"ID":"eca00397-85e6-401b-b0a8-011a3307b0ee","Type":"ContainerDied","Data":"b5a4ea3ff8abe8ddb8098ddc050853369ea5eb6a6292e54a9963d81831f3126b"} Nov 24 16:56:08 crc kubenswrapper[4768]: I1124 16:56:08.235692 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fz5jq" event={"ID":"eca00397-85e6-401b-b0a8-011a3307b0ee","Type":"ContainerStarted","Data":"785f26ec49524e967b3f2781c422e21117f7bc8feeec45233a5e1990d24b39d6"} Nov 24 16:56:08 crc kubenswrapper[4768]: I1124 16:56:08.242931 4768 generic.go:334] "Generic (PLEG): container finished" podID="41aba52e-e435-4061-88d5-30b6d8b78806" containerID="3f323426faf2a596a3b11350d94bc555a2d44d26f59ec1a29e33287d76acca12" exitCode=0 
Nov 24 16:56:08 crc kubenswrapper[4768]: I1124 16:56:08.243123 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsv6c" event={"ID":"41aba52e-e435-4061-88d5-30b6d8b78806","Type":"ContainerDied","Data":"3f323426faf2a596a3b11350d94bc555a2d44d26f59ec1a29e33287d76acca12"} Nov 24 16:56:08 crc kubenswrapper[4768]: I1124 16:56:08.243399 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dsv6c" event={"ID":"41aba52e-e435-4061-88d5-30b6d8b78806","Type":"ContainerStarted","Data":"16fc57d1e2bae1e9b05c59e0e2513ad9e0186db845654cbc62250b4bb4283abf"} Nov 24 16:56:08 crc kubenswrapper[4768]: I1124 16:56:08.250402 4768 generic.go:334] "Generic (PLEG): container finished" podID="9e71cc43-12fc-4315-992f-af825fe58680" containerID="915f5a7001500152fd418616ad4ec91ffaaa5a10eb9d6619ebde4a77930843a6" exitCode=0 Nov 24 16:56:08 crc kubenswrapper[4768]: I1124 16:56:08.250440 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47fqh" event={"ID":"9e71cc43-12fc-4315-992f-af825fe58680","Type":"ContainerDied","Data":"915f5a7001500152fd418616ad4ec91ffaaa5a10eb9d6619ebde4a77930843a6"} Nov 24 16:56:08 crc kubenswrapper[4768]: I1124 16:56:08.330503 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dsv6c" podStartSLOduration=2.775271648 podStartE2EDuration="4.330470704s" podCreationTimestamp="2025-11-24 16:56:04 +0000 UTC" firstStartedPulling="2025-11-24 16:56:06.21760354 +0000 UTC m=+247.464572208" lastFinishedPulling="2025-11-24 16:56:07.772802606 +0000 UTC m=+249.019771264" observedRunningTime="2025-11-24 16:56:08.329112531 +0000 UTC m=+249.576081199" watchObservedRunningTime="2025-11-24 16:56:08.330470704 +0000 UTC m=+249.577439352" Nov 24 16:56:09 crc kubenswrapper[4768]: I1124 16:56:09.258987 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-47fqh" event={"ID":"9e71cc43-12fc-4315-992f-af825fe58680","Type":"ContainerStarted","Data":"b42e148196eea58059a586eac029171acac0d95a96c652b395368b38d407c2c7"} Nov 24 16:56:09 crc kubenswrapper[4768]: I1124 16:56:09.284217 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-47fqh" podStartSLOduration=2.787370774 podStartE2EDuration="5.284185137s" podCreationTimestamp="2025-11-24 16:56:04 +0000 UTC" firstStartedPulling="2025-11-24 16:56:06.218956273 +0000 UTC m=+247.465924932" lastFinishedPulling="2025-11-24 16:56:08.715770637 +0000 UTC m=+249.962739295" observedRunningTime="2025-11-24 16:56:09.280150978 +0000 UTC m=+250.527119636" watchObservedRunningTime="2025-11-24 16:56:09.284185137 +0000 UTC m=+250.531153795" Nov 24 16:56:10 crc kubenswrapper[4768]: I1124 16:56:10.269974 4768 generic.go:334] "Generic (PLEG): container finished" podID="eca00397-85e6-401b-b0a8-011a3307b0ee" containerID="e8c4a2a780efaafd140d52577e5f408eb2a8bac02dd88e90a763afb9e20a1c30" exitCode=0 Nov 24 16:56:10 crc kubenswrapper[4768]: I1124 16:56:10.270063 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fz5jq" event={"ID":"eca00397-85e6-401b-b0a8-011a3307b0ee","Type":"ContainerDied","Data":"e8c4a2a780efaafd140d52577e5f408eb2a8bac02dd88e90a763afb9e20a1c30"} Nov 24 16:56:10 crc kubenswrapper[4768]: I1124 16:56:10.274187 4768 generic.go:334] "Generic (PLEG): container finished" podID="3ec76654-6209-40eb-85dc-861ddae3c79f" 
containerID="1b07fe6b381e0b6346ffabb62fd6b7c449147086fc99e06dfa5d795e05bc655d" exitCode=0 Nov 24 16:56:10 crc kubenswrapper[4768]: I1124 16:56:10.274900 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qjbs" event={"ID":"3ec76654-6209-40eb-85dc-861ddae3c79f","Type":"ContainerDied","Data":"1b07fe6b381e0b6346ffabb62fd6b7c449147086fc99e06dfa5d795e05bc655d"} Nov 24 16:56:11 crc kubenswrapper[4768]: I1124 16:56:11.284936 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fz5jq" event={"ID":"eca00397-85e6-401b-b0a8-011a3307b0ee","Type":"ContainerStarted","Data":"0b745aa1620c2d8cacb6081c747514eaa9c6c30b21f4daee87771bbb9d519dfd"} Nov 24 16:56:11 crc kubenswrapper[4768]: I1124 16:56:11.289015 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qjbs" event={"ID":"3ec76654-6209-40eb-85dc-861ddae3c79f","Type":"ContainerStarted","Data":"b9ec43f64d3902325ef841f4a61c75e80db7dae4fb76e81819a4a0616fd42be6"} Nov 24 16:56:11 crc kubenswrapper[4768]: I1124 16:56:11.329218 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fz5jq" podStartSLOduration=1.920937286 podStartE2EDuration="4.329188837s" podCreationTimestamp="2025-11-24 16:56:07 +0000 UTC" firstStartedPulling="2025-11-24 16:56:08.238717099 +0000 UTC m=+249.485685757" lastFinishedPulling="2025-11-24 16:56:10.64696864 +0000 UTC m=+251.893937308" observedRunningTime="2025-11-24 16:56:11.308946642 +0000 UTC m=+252.555915300" watchObservedRunningTime="2025-11-24 16:56:11.329188837 +0000 UTC m=+252.576157495" Nov 24 16:56:11 crc kubenswrapper[4768]: I1124 16:56:11.330488 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4qjbs" podStartSLOduration=1.65218615 podStartE2EDuration="4.330480639s" podCreationTimestamp="2025-11-24 16:56:07 +0000 UTC" firstStartedPulling="2025-11-24 16:56:08.234688811 +0000 UTC m=+249.481657469" lastFinishedPulling="2025-11-24 16:56:10.9129833 +0000 UTC m=+252.159951958" observedRunningTime="2025-11-24 16:56:11.327339778 +0000 UTC m=+252.574308456" watchObservedRunningTime="2025-11-24 16:56:11.330480639 +0000 UTC m=+252.577449297" Nov 24 16:56:15 crc kubenswrapper[4768]: I1124 16:56:15.097510 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dsv6c" Nov 24 16:56:15 crc kubenswrapper[4768]: I1124 16:56:15.098321 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dsv6c" Nov 24 16:56:15 crc kubenswrapper[4768]: I1124 16:56:15.145489 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dsv6c" Nov 24 16:56:15 crc kubenswrapper[4768]: I1124 16:56:15.297058 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-47fqh" Nov 24 16:56:15 crc kubenswrapper[4768]: I1124 16:56:15.298179 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-47fqh" Nov 24 16:56:15 crc kubenswrapper[4768]: I1124 16:56:15.393520 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dsv6c" Nov 24 16:56:15 crc kubenswrapper[4768]: I1124 16:56:15.394070 4768 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-47fqh" Nov 24 16:56:15 crc kubenswrapper[4768]: I1124 16:56:15.463516 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-47fqh" Nov 24 16:56:17 crc kubenswrapper[4768]: I1124 16:56:17.589888 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4qjbs" Nov 24 16:56:17 crc kubenswrapper[4768]: I1124 16:56:17.589943 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4qjbs" Nov 24 16:56:17 crc kubenswrapper[4768]: I1124 16:56:17.642617 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4qjbs" Nov 24 16:56:17 crc kubenswrapper[4768]: I1124 16:56:17.750614 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fz5jq" Nov 24 16:56:17 crc kubenswrapper[4768]: I1124 16:56:17.750661 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fz5jq" Nov 24 16:56:17 crc kubenswrapper[4768]: I1124 16:56:17.793094 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fz5jq" Nov 24 16:56:18 crc kubenswrapper[4768]: I1124 16:56:18.398785 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4qjbs" Nov 24 16:56:18 crc kubenswrapper[4768]: I1124 16:56:18.406568 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fz5jq" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.377094 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" podUID="86a58543-2a12-4886-93ce-8d25432a2166" containerName="oauth-openshift" containerID="cri-o://8cdb7fa2569f15197b94653da351520e55440ca4811385ac8b609185692a46c2" gracePeriod=15 Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.802924 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.850706 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7f687b986-8rlsm"] Nov 24 16:56:26 crc kubenswrapper[4768]: E1124 16:56:26.850925 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86a58543-2a12-4886-93ce-8d25432a2166" containerName="oauth-openshift" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.850938 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="86a58543-2a12-4886-93ce-8d25432a2166" containerName="oauth-openshift" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.851040 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="86a58543-2a12-4886-93ce-8d25432a2166" containerName="oauth-openshift" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.851429 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.866633 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f687b986-8rlsm"] Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.935912 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-cliconfig\") pod \"86a58543-2a12-4886-93ce-8d25432a2166\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.935976 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-service-ca\") pod \"86a58543-2a12-4886-93ce-8d25432a2166\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936018 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6mtv\" (UniqueName: \"kubernetes.io/projected/86a58543-2a12-4886-93ce-8d25432a2166-kube-api-access-d6mtv\") pod \"86a58543-2a12-4886-93ce-8d25432a2166\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936040 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-audit-policies\") pod \"86a58543-2a12-4886-93ce-8d25432a2166\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936099 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-serving-cert\") pod \"86a58543-2a12-4886-93ce-8d25432a2166\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936116 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-ocp-branding-template\") pod \"86a58543-2a12-4886-93ce-8d25432a2166\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936158 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-template-login\") pod \"86a58543-2a12-4886-93ce-8d25432a2166\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936178 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/86a58543-2a12-4886-93ce-8d25432a2166-audit-dir\") pod \"86a58543-2a12-4886-93ce-8d25432a2166\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936225 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-template-provider-selection\") pod \"86a58543-2a12-4886-93ce-8d25432a2166\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936265 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-trusted-ca-bundle\") pod \"86a58543-2a12-4886-93ce-8d25432a2166\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936280 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-idp-0-file-data\") pod \"86a58543-2a12-4886-93ce-8d25432a2166\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936297 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-router-certs\") pod \"86a58543-2a12-4886-93ce-8d25432a2166\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936313 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-session\") pod \"86a58543-2a12-4886-93ce-8d25432a2166\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936387 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-template-error\") pod \"86a58543-2a12-4886-93ce-8d25432a2166\" (UID: \"86a58543-2a12-4886-93ce-8d25432a2166\") " Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936599 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7df30ff6-8662-4c2c-80b9-d466b4b42d61-audit-policies\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936634 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-user-template-login\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936655 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-user-template-error\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936680 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7df30ff6-8662-4c2c-80b9-d466b4b42d61-audit-dir\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936702 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936722 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936749 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936767 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936804 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntb26\" (UniqueName: \"kubernetes.io/projected/7df30ff6-8662-4c2c-80b9-d466b4b42d61-kube-api-access-ntb26\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936836 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936870 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " 
pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936894 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-system-session\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936915 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.936937 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.938129 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "86a58543-2a12-4886-93ce-8d25432a2166" (UID: "86a58543-2a12-4886-93ce-8d25432a2166"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.938424 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "86a58543-2a12-4886-93ce-8d25432a2166" (UID: "86a58543-2a12-4886-93ce-8d25432a2166"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.938687 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86a58543-2a12-4886-93ce-8d25432a2166-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "86a58543-2a12-4886-93ce-8d25432a2166" (UID: "86a58543-2a12-4886-93ce-8d25432a2166"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.939393 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "86a58543-2a12-4886-93ce-8d25432a2166" (UID: "86a58543-2a12-4886-93ce-8d25432a2166"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.939794 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "86a58543-2a12-4886-93ce-8d25432a2166" (UID: "86a58543-2a12-4886-93ce-8d25432a2166"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.948276 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86a58543-2a12-4886-93ce-8d25432a2166-kube-api-access-d6mtv" (OuterVolumeSpecName: "kube-api-access-d6mtv") pod "86a58543-2a12-4886-93ce-8d25432a2166" (UID: "86a58543-2a12-4886-93ce-8d25432a2166"). InnerVolumeSpecName "kube-api-access-d6mtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.948309 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "86a58543-2a12-4886-93ce-8d25432a2166" (UID: "86a58543-2a12-4886-93ce-8d25432a2166"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.948711 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "86a58543-2a12-4886-93ce-8d25432a2166" (UID: "86a58543-2a12-4886-93ce-8d25432a2166"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.948927 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "86a58543-2a12-4886-93ce-8d25432a2166" (UID: "86a58543-2a12-4886-93ce-8d25432a2166"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.949242 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "86a58543-2a12-4886-93ce-8d25432a2166" (UID: "86a58543-2a12-4886-93ce-8d25432a2166"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.949499 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "86a58543-2a12-4886-93ce-8d25432a2166" (UID: "86a58543-2a12-4886-93ce-8d25432a2166"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.949705 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "86a58543-2a12-4886-93ce-8d25432a2166" (UID: "86a58543-2a12-4886-93ce-8d25432a2166"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.955525 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "86a58543-2a12-4886-93ce-8d25432a2166" (UID: "86a58543-2a12-4886-93ce-8d25432a2166"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:56:26 crc kubenswrapper[4768]: I1124 16:56:26.956866 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "86a58543-2a12-4886-93ce-8d25432a2166" (UID: "86a58543-2a12-4886-93ce-8d25432a2166"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.037985 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7df30ff6-8662-4c2c-80b9-d466b4b42d61-audit-dir\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038084 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038105 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7df30ff6-8662-4c2c-80b9-d466b4b42d61-audit-dir\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038121 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038185 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038214 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038259 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntb26\" (UniqueName: \"kubernetes.io/projected/7df30ff6-8662-4c2c-80b9-d466b4b42d61-kube-api-access-ntb26\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038295 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038328 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038376 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-system-session\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038399 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038421 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038441 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7df30ff6-8662-4c2c-80b9-d466b4b42d61-audit-policies\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: 
\"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038469 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-user-template-login\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038496 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-user-template-error\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038556 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038570 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038584 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6mtv\" (UniqueName: \"kubernetes.io/projected/86a58543-2a12-4886-93ce-8d25432a2166-kube-api-access-d6mtv\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038595 4768 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038609 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038624 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038637 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038650 4768 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/86a58543-2a12-4886-93ce-8d25432a2166-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038662 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038674 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038687 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038701 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038712 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.038725 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/86a58543-2a12-4886-93ce-8d25432a2166-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.042319 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.043708 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.044836 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-user-template-error\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.044922 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.045059 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.045219 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-system-session\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.045462 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7df30ff6-8662-4c2c-80b9-d466b4b42d61-audit-policies\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.045769 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.045841 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.045951 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.046996 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.048948 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7df30ff6-8662-4c2c-80b9-d466b4b42d61-v4-0-config-user-template-login\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.061878 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntb26\" (UniqueName: 
\"kubernetes.io/projected/7df30ff6-8662-4c2c-80b9-d466b4b42d61-kube-api-access-ntb26\") pod \"oauth-openshift-7f687b986-8rlsm\" (UID: \"7df30ff6-8662-4c2c-80b9-d466b4b42d61\") " pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.174556 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.411261 4768 generic.go:334] "Generic (PLEG): container finished" podID="86a58543-2a12-4886-93ce-8d25432a2166" containerID="8cdb7fa2569f15197b94653da351520e55440ca4811385ac8b609185692a46c2" exitCode=0 Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.411403 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.411397 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" event={"ID":"86a58543-2a12-4886-93ce-8d25432a2166","Type":"ContainerDied","Data":"8cdb7fa2569f15197b94653da351520e55440ca4811385ac8b609185692a46c2"} Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.411903 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4dgcz" event={"ID":"86a58543-2a12-4886-93ce-8d25432a2166","Type":"ContainerDied","Data":"db7b499585059eba4ffca93478ba6d2e81b64acd9bc1185bb7972149d595c737"} Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.411947 4768 scope.go:117] "RemoveContainer" containerID="8cdb7fa2569f15197b94653da351520e55440ca4811385ac8b609185692a46c2" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.451005 4768 scope.go:117] "RemoveContainer" containerID="8cdb7fa2569f15197b94653da351520e55440ca4811385ac8b609185692a46c2" Nov 24 16:56:27 crc kubenswrapper[4768]: E1124 16:56:27.452355 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cdb7fa2569f15197b94653da351520e55440ca4811385ac8b609185692a46c2\": container with ID starting with 8cdb7fa2569f15197b94653da351520e55440ca4811385ac8b609185692a46c2 not found: ID does not exist" containerID="8cdb7fa2569f15197b94653da351520e55440ca4811385ac8b609185692a46c2" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.452387 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cdb7fa2569f15197b94653da351520e55440ca4811385ac8b609185692a46c2"} err="failed to get container status \"8cdb7fa2569f15197b94653da351520e55440ca4811385ac8b609185692a46c2\": rpc error: code = NotFound desc = could not find container \"8cdb7fa2569f15197b94653da351520e55440ca4811385ac8b609185692a46c2\": container with ID starting with 8cdb7fa2569f15197b94653da351520e55440ca4811385ac8b609185692a46c2 not found: ID does not exist" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.453850 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4dgcz"] Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.458467 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4dgcz"] Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.594030 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86a58543-2a12-4886-93ce-8d25432a2166" 
path="/var/lib/kubelet/pods/86a58543-2a12-4886-93ce-8d25432a2166/volumes" Nov 24 16:56:27 crc kubenswrapper[4768]: I1124 16:56:27.662734 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f687b986-8rlsm"] Nov 24 16:56:28 crc kubenswrapper[4768]: I1124 16:56:28.422052 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" event={"ID":"7df30ff6-8662-4c2c-80b9-d466b4b42d61","Type":"ContainerStarted","Data":"2cedb2e60c1329e17c967c9cb1d4d863f01ab9c548712f70fb95bb1202a07a2a"} Nov 24 16:56:28 crc kubenswrapper[4768]: I1124 16:56:28.422790 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:56:28 crc kubenswrapper[4768]: I1124 16:56:28.422914 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" event={"ID":"7df30ff6-8662-4c2c-80b9-d466b4b42d61","Type":"ContainerStarted","Data":"9359de990a4d2282258f06e33fc21a98ef6e37901d83eca45e3dbddccbd46bdf"} Nov 24 16:56:28 crc kubenswrapper[4768]: I1124 16:56:28.461890 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" podStartSLOduration=27.461869875 podStartE2EDuration="27.461869875s" podCreationTimestamp="2025-11-24 16:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:56:28.457452694 +0000 UTC m=+269.704421352" watchObservedRunningTime="2025-11-24 16:56:28.461869875 +0000 UTC m=+269.708838533" Nov 24 16:56:28 crc kubenswrapper[4768]: I1124 16:56:28.663756 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7f687b986-8rlsm" Nov 24 16:58:04 crc kubenswrapper[4768]: I1124 16:58:04.893407 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 16:58:04 crc kubenswrapper[4768]: I1124 16:58:04.894007 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 16:58:34 crc kubenswrapper[4768]: I1124 16:58:34.893390 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 16:58:34 crc kubenswrapper[4768]: I1124 16:58:34.895739 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 16:59:04 crc kubenswrapper[4768]: I1124 16:59:04.897770 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 16:59:04 crc kubenswrapper[4768]: I1124 16:59:04.898634 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 16:59:04 crc kubenswrapper[4768]: I1124 16:59:04.898699 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf255" Nov 24 16:59:04 crc kubenswrapper[4768]: I1124 16:59:04.899588 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7a329c2bcc79a9f3f10df267612fb0d9f6aef0e5add7ff881e55c584ace2a157"} pod="openshift-machine-config-operator/machine-config-daemon-jf255" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 16:59:04 crc kubenswrapper[4768]: I1124 16:59:04.899654 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" containerID="cri-o://7a329c2bcc79a9f3f10df267612fb0d9f6aef0e5add7ff881e55c584ace2a157" gracePeriod=600 Nov 24 16:59:05 crc kubenswrapper[4768]: I1124 16:59:05.441706 4768 generic.go:334] "Generic (PLEG): container finished" podID="517d8128-bef5-40a3-a786-5010780c2a58" containerID="7a329c2bcc79a9f3f10df267612fb0d9f6aef0e5add7ff881e55c584ace2a157" exitCode=0 Nov 24 16:59:05 crc kubenswrapper[4768]: I1124 16:59:05.441782 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" event={"ID":"517d8128-bef5-40a3-a786-5010780c2a58","Type":"ContainerDied","Data":"7a329c2bcc79a9f3f10df267612fb0d9f6aef0e5add7ff881e55c584ace2a157"} Nov 24 16:59:05 crc kubenswrapper[4768]: I1124 16:59:05.442118 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" event={"ID":"517d8128-bef5-40a3-a786-5010780c2a58","Type":"ContainerStarted","Data":"6ca92bad52ab5f1c01d70ab976d6cd2ca8cb33df2eb005d50a6ec3e7eded09d6"} Nov 24 16:59:05 crc kubenswrapper[4768]: I1124 16:59:05.442164 4768 scope.go:117] "RemoveContainer" containerID="d629b6c3f9a9408d1f527aca764ddcc1dbc43662c0110d6565da3a04e73ef760" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.180042 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hc4zm"] Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.181643 4768 util.go:30] "No sandbox for pod can be found. 
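The machine-config-daemon entries above show three liveness failures spaced exactly 30 seconds apart (16:58:04, 16:58:34, 16:59:04), with the restart decision made only on the third; that pattern is consistent with, though not proof of, a probe configured with periodSeconds: 30 and failureThreshold: 3, and the gracePeriod=600 on the kill presumably reflects the pod's terminationGracePeriodSeconds. The RemoveContainer for d629b6c3... right after the restart is kubelet pruning the container left from the previous restart. Below is a minimal Go sketch of that consecutive-failure loop; it is not kubelet's prober (the prober.go/patch_prober.go lines above), just the shape, with the endpoint, period, and threshold taken from this log as assumptions.

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // probe performs one HTTP liveness check, the same shape as the
    // "Get http://127.0.0.1:8798/health" attempts in the log above.
    func probe(url string) error {
    	client := &http.Client{Timeout: time.Second}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. "connect: connection refused"
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
    		return fmt.Errorf("unexpected status %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	const (
    		period    = 30 * time.Second // assumed periodSeconds
    		threshold = 3                // assumed failureThreshold
    	)
    	failures := 0
    	for range time.Tick(period) {
    		if err := probe("http://127.0.0.1:8798/health"); err != nil {
    			failures++
    			fmt.Printf("probe failed (%d/%d): %v\n", failures, threshold, err)
    			if failures >= threshold {
    				fmt.Println("threshold reached: kill container, let it restart")
    				failures = 0
    			}
    			continue
    		}
    		failures = 0 // one success resets the consecutive-failure count
    	}
    }
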
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.196552 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hc4zm"] Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.362411 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ae1a7523-c36a-4b0e-aeae-362e926d3799-registry-tls\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.362511 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae1a7523-c36a-4b0e-aeae-362e926d3799-trusted-ca\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.362538 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ae1a7523-c36a-4b0e-aeae-362e926d3799-bound-sa-token\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.362593 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ae1a7523-c36a-4b0e-aeae-362e926d3799-registry-certificates\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.362616 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ae1a7523-c36a-4b0e-aeae-362e926d3799-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.362663 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.362681 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqv6p\" (UniqueName: \"kubernetes.io/projected/ae1a7523-c36a-4b0e-aeae-362e926d3799-kube-api-access-qqv6p\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.362706 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/ae1a7523-c36a-4b0e-aeae-362e926d3799-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.386737 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.464452 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ae1a7523-c36a-4b0e-aeae-362e926d3799-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.464584 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ae1a7523-c36a-4b0e-aeae-362e926d3799-registry-tls\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.464645 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae1a7523-c36a-4b0e-aeae-362e926d3799-trusted-ca\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.464683 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ae1a7523-c36a-4b0e-aeae-362e926d3799-bound-sa-token\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.464728 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ae1a7523-c36a-4b0e-aeae-362e926d3799-registry-certificates\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.464768 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ae1a7523-c36a-4b0e-aeae-362e926d3799-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.464843 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqv6p\" (UniqueName: \"kubernetes.io/projected/ae1a7523-c36a-4b0e-aeae-362e926d3799-kube-api-access-qqv6p\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.465233 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ae1a7523-c36a-4b0e-aeae-362e926d3799-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.466155 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ae1a7523-c36a-4b0e-aeae-362e926d3799-registry-certificates\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.466742 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae1a7523-c36a-4b0e-aeae-362e926d3799-trusted-ca\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.472972 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ae1a7523-c36a-4b0e-aeae-362e926d3799-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.472981 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ae1a7523-c36a-4b0e-aeae-362e926d3799-registry-tls\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.488139 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqv6p\" (UniqueName: \"kubernetes.io/projected/ae1a7523-c36a-4b0e-aeae-362e926d3799-kube-api-access-qqv6p\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.488269 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ae1a7523-c36a-4b0e-aeae-362e926d3799-bound-sa-token\") pod \"image-registry-66df7c8f76-hc4zm\" (UID: \"ae1a7523-c36a-4b0e-aeae-362e926d3799\") " pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.564189 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:49 crc kubenswrapper[4768]: I1124 16:59:49.974601 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hc4zm"] Nov 24 16:59:50 crc kubenswrapper[4768]: I1124 16:59:50.767790 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" event={"ID":"ae1a7523-c36a-4b0e-aeae-362e926d3799","Type":"ContainerStarted","Data":"8f61c3c7d4647748ff6eefc88787593033f46780bd487af304cdbee1276fb18d"} Nov 24 16:59:50 crc kubenswrapper[4768]: I1124 16:59:50.768160 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" event={"ID":"ae1a7523-c36a-4b0e-aeae-362e926d3799","Type":"ContainerStarted","Data":"14430795db65a758485246a04115cfdc280774f5f16ab6b940d1c865f0f067f7"} Nov 24 16:59:50 crc kubenswrapper[4768]: I1124 16:59:50.768196 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 16:59:50 crc kubenswrapper[4768]: I1124 16:59:50.802389 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" podStartSLOduration=1.802341984 podStartE2EDuration="1.802341984s" podCreationTimestamp="2025-11-24 16:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 16:59:50.797690076 +0000 UTC m=+472.044658774" watchObservedRunningTime="2025-11-24 16:59:50.802341984 +0000 UTC m=+472.049310682" Nov 24 17:00:00 crc kubenswrapper[4768]: I1124 17:00:00.131684 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400060-gp4kt"] Nov 24 17:00:00 crc kubenswrapper[4768]: I1124 17:00:00.133148 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400060-gp4kt" Nov 24 17:00:00 crc kubenswrapper[4768]: I1124 17:00:00.136267 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 17:00:00 crc kubenswrapper[4768]: I1124 17:00:00.136573 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 17:00:00 crc kubenswrapper[4768]: I1124 17:00:00.141294 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400060-gp4kt"] Nov 24 17:00:00 crc kubenswrapper[4768]: I1124 17:00:00.244791 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6jcm\" (UniqueName: \"kubernetes.io/projected/095f5a55-c57f-40e6-8465-c30d81b324de-kube-api-access-x6jcm\") pod \"collect-profiles-29400060-gp4kt\" (UID: \"095f5a55-c57f-40e6-8465-c30d81b324de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400060-gp4kt" Nov 24 17:00:00 crc kubenswrapper[4768]: I1124 17:00:00.244857 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/095f5a55-c57f-40e6-8465-c30d81b324de-secret-volume\") pod \"collect-profiles-29400060-gp4kt\" (UID: \"095f5a55-c57f-40e6-8465-c30d81b324de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400060-gp4kt" Nov 24 17:00:00 crc kubenswrapper[4768]: I1124 17:00:00.244914 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/095f5a55-c57f-40e6-8465-c30d81b324de-config-volume\") pod \"collect-profiles-29400060-gp4kt\" (UID: \"095f5a55-c57f-40e6-8465-c30d81b324de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400060-gp4kt" Nov 24 17:00:00 crc kubenswrapper[4768]: I1124 17:00:00.346646 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6jcm\" (UniqueName: \"kubernetes.io/projected/095f5a55-c57f-40e6-8465-c30d81b324de-kube-api-access-x6jcm\") pod \"collect-profiles-29400060-gp4kt\" (UID: \"095f5a55-c57f-40e6-8465-c30d81b324de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400060-gp4kt" Nov 24 17:00:00 crc kubenswrapper[4768]: I1124 17:00:00.346723 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/095f5a55-c57f-40e6-8465-c30d81b324de-secret-volume\") pod \"collect-profiles-29400060-gp4kt\" (UID: \"095f5a55-c57f-40e6-8465-c30d81b324de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400060-gp4kt" Nov 24 17:00:00 crc kubenswrapper[4768]: I1124 17:00:00.346760 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/095f5a55-c57f-40e6-8465-c30d81b324de-config-volume\") pod \"collect-profiles-29400060-gp4kt\" (UID: \"095f5a55-c57f-40e6-8465-c30d81b324de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400060-gp4kt" Nov 24 17:00:00 crc kubenswrapper[4768]: I1124 17:00:00.347753 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/095f5a55-c57f-40e6-8465-c30d81b324de-config-volume\") pod 
\"collect-profiles-29400060-gp4kt\" (UID: \"095f5a55-c57f-40e6-8465-c30d81b324de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400060-gp4kt" Nov 24 17:00:00 crc kubenswrapper[4768]: I1124 17:00:00.352222 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/095f5a55-c57f-40e6-8465-c30d81b324de-secret-volume\") pod \"collect-profiles-29400060-gp4kt\" (UID: \"095f5a55-c57f-40e6-8465-c30d81b324de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400060-gp4kt" Nov 24 17:00:00 crc kubenswrapper[4768]: I1124 17:00:00.362841 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6jcm\" (UniqueName: \"kubernetes.io/projected/095f5a55-c57f-40e6-8465-c30d81b324de-kube-api-access-x6jcm\") pod \"collect-profiles-29400060-gp4kt\" (UID: \"095f5a55-c57f-40e6-8465-c30d81b324de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400060-gp4kt" Nov 24 17:00:00 crc kubenswrapper[4768]: I1124 17:00:00.456303 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400060-gp4kt" Nov 24 17:00:00 crc kubenswrapper[4768]: I1124 17:00:00.637116 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400060-gp4kt"] Nov 24 17:00:00 crc kubenswrapper[4768]: I1124 17:00:00.828131 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400060-gp4kt" event={"ID":"095f5a55-c57f-40e6-8465-c30d81b324de","Type":"ContainerStarted","Data":"8e0cf7958ff7c082914b05919f4b02aa79ef18e882f543f0b150c902677a1b47"} Nov 24 17:00:00 crc kubenswrapper[4768]: I1124 17:00:00.828452 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400060-gp4kt" event={"ID":"095f5a55-c57f-40e6-8465-c30d81b324de","Type":"ContainerStarted","Data":"66082cb8274dbc030ee62b42d834a6da2425708277c63f3c39d0badb325c94d8"} Nov 24 17:00:00 crc kubenswrapper[4768]: I1124 17:00:00.844414 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400060-gp4kt" podStartSLOduration=0.84439506 podStartE2EDuration="844.39506ms" podCreationTimestamp="2025-11-24 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:00:00.842775863 +0000 UTC m=+482.089744531" watchObservedRunningTime="2025-11-24 17:00:00.84439506 +0000 UTC m=+482.091363718" Nov 24 17:00:01 crc kubenswrapper[4768]: I1124 17:00:01.835202 4768 generic.go:334] "Generic (PLEG): container finished" podID="095f5a55-c57f-40e6-8465-c30d81b324de" containerID="8e0cf7958ff7c082914b05919f4b02aa79ef18e882f543f0b150c902677a1b47" exitCode=0 Nov 24 17:00:01 crc kubenswrapper[4768]: I1124 17:00:01.835251 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400060-gp4kt" event={"ID":"095f5a55-c57f-40e6-8465-c30d81b324de","Type":"ContainerDied","Data":"8e0cf7958ff7c082914b05919f4b02aa79ef18e882f543f0b150c902677a1b47"} Nov 24 17:00:03 crc kubenswrapper[4768]: I1124 17:00:03.094759 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400060-gp4kt" Nov 24 17:00:03 crc kubenswrapper[4768]: I1124 17:00:03.286245 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6jcm\" (UniqueName: \"kubernetes.io/projected/095f5a55-c57f-40e6-8465-c30d81b324de-kube-api-access-x6jcm\") pod \"095f5a55-c57f-40e6-8465-c30d81b324de\" (UID: \"095f5a55-c57f-40e6-8465-c30d81b324de\") " Nov 24 17:00:03 crc kubenswrapper[4768]: I1124 17:00:03.286301 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/095f5a55-c57f-40e6-8465-c30d81b324de-config-volume\") pod \"095f5a55-c57f-40e6-8465-c30d81b324de\" (UID: \"095f5a55-c57f-40e6-8465-c30d81b324de\") " Nov 24 17:00:03 crc kubenswrapper[4768]: I1124 17:00:03.286382 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/095f5a55-c57f-40e6-8465-c30d81b324de-secret-volume\") pod \"095f5a55-c57f-40e6-8465-c30d81b324de\" (UID: \"095f5a55-c57f-40e6-8465-c30d81b324de\") " Nov 24 17:00:03 crc kubenswrapper[4768]: I1124 17:00:03.287276 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/095f5a55-c57f-40e6-8465-c30d81b324de-config-volume" (OuterVolumeSpecName: "config-volume") pod "095f5a55-c57f-40e6-8465-c30d81b324de" (UID: "095f5a55-c57f-40e6-8465-c30d81b324de"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:00:03 crc kubenswrapper[4768]: I1124 17:00:03.293114 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/095f5a55-c57f-40e6-8465-c30d81b324de-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "095f5a55-c57f-40e6-8465-c30d81b324de" (UID: "095f5a55-c57f-40e6-8465-c30d81b324de"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:00:03 crc kubenswrapper[4768]: I1124 17:00:03.294495 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/095f5a55-c57f-40e6-8465-c30d81b324de-kube-api-access-x6jcm" (OuterVolumeSpecName: "kube-api-access-x6jcm") pod "095f5a55-c57f-40e6-8465-c30d81b324de" (UID: "095f5a55-c57f-40e6-8465-c30d81b324de"). InnerVolumeSpecName "kube-api-access-x6jcm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:00:03 crc kubenswrapper[4768]: I1124 17:00:03.388197 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6jcm\" (UniqueName: \"kubernetes.io/projected/095f5a55-c57f-40e6-8465-c30d81b324de-kube-api-access-x6jcm\") on node \"crc\" DevicePath \"\"" Nov 24 17:00:03 crc kubenswrapper[4768]: I1124 17:00:03.388242 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/095f5a55-c57f-40e6-8465-c30d81b324de-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 17:00:03 crc kubenswrapper[4768]: I1124 17:00:03.388253 4768 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/095f5a55-c57f-40e6-8465-c30d81b324de-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 17:00:03 crc kubenswrapper[4768]: I1124 17:00:03.848462 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400060-gp4kt" event={"ID":"095f5a55-c57f-40e6-8465-c30d81b324de","Type":"ContainerDied","Data":"66082cb8274dbc030ee62b42d834a6da2425708277c63f3c39d0badb325c94d8"} Nov 24 17:00:03 crc kubenswrapper[4768]: I1124 17:00:03.848511 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66082cb8274dbc030ee62b42d834a6da2425708277c63f3c39d0badb325c94d8" Nov 24 17:00:03 crc kubenswrapper[4768]: I1124 17:00:03.848577 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400060-gp4kt" Nov 24 17:00:09 crc kubenswrapper[4768]: I1124 17:00:09.573381 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-hc4zm" Nov 24 17:00:09 crc kubenswrapper[4768]: I1124 17:00:09.657250 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mgmbb"] Nov 24 17:00:34 crc kubenswrapper[4768]: I1124 17:00:34.730128 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" podUID="38d3cf53-6a1c-4009-9b0a-0638aae38656" containerName="registry" containerID="cri-o://9a3c92150f0a02a172deff3b9bf9821fb05d24a601c8bb39569128e7909d3c49" gracePeriod=30 Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.067121 4768 generic.go:334] "Generic (PLEG): container finished" podID="38d3cf53-6a1c-4009-9b0a-0638aae38656" containerID="9a3c92150f0a02a172deff3b9bf9821fb05d24a601c8bb39569128e7909d3c49" exitCode=0 Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.067213 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" event={"ID":"38d3cf53-6a1c-4009-9b0a-0638aae38656","Type":"ContainerDied","Data":"9a3c92150f0a02a172deff3b9bf9821fb05d24a601c8bb39569128e7909d3c49"} Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.171547 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.253597 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/38d3cf53-6a1c-4009-9b0a-0638aae38656-ca-trust-extracted\") pod \"38d3cf53-6a1c-4009-9b0a-0638aae38656\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.253648 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdmth\" (UniqueName: \"kubernetes.io/projected/38d3cf53-6a1c-4009-9b0a-0638aae38656-kube-api-access-xdmth\") pod \"38d3cf53-6a1c-4009-9b0a-0638aae38656\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.253682 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/38d3cf53-6a1c-4009-9b0a-0638aae38656-registry-tls\") pod \"38d3cf53-6a1c-4009-9b0a-0638aae38656\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.253710 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/38d3cf53-6a1c-4009-9b0a-0638aae38656-bound-sa-token\") pod \"38d3cf53-6a1c-4009-9b0a-0638aae38656\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.253932 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"38d3cf53-6a1c-4009-9b0a-0638aae38656\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.253965 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38d3cf53-6a1c-4009-9b0a-0638aae38656-trusted-ca\") pod \"38d3cf53-6a1c-4009-9b0a-0638aae38656\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.254029 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/38d3cf53-6a1c-4009-9b0a-0638aae38656-installation-pull-secrets\") pod \"38d3cf53-6a1c-4009-9b0a-0638aae38656\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.254053 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/38d3cf53-6a1c-4009-9b0a-0638aae38656-registry-certificates\") pod \"38d3cf53-6a1c-4009-9b0a-0638aae38656\" (UID: \"38d3cf53-6a1c-4009-9b0a-0638aae38656\") " Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.255504 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38d3cf53-6a1c-4009-9b0a-0638aae38656-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "38d3cf53-6a1c-4009-9b0a-0638aae38656" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.255721 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38d3cf53-6a1c-4009-9b0a-0638aae38656-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "38d3cf53-6a1c-4009-9b0a-0638aae38656" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.259520 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38d3cf53-6a1c-4009-9b0a-0638aae38656-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "38d3cf53-6a1c-4009-9b0a-0638aae38656" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.265751 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38d3cf53-6a1c-4009-9b0a-0638aae38656-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "38d3cf53-6a1c-4009-9b0a-0638aae38656" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.265776 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38d3cf53-6a1c-4009-9b0a-0638aae38656-kube-api-access-xdmth" (OuterVolumeSpecName: "kube-api-access-xdmth") pod "38d3cf53-6a1c-4009-9b0a-0638aae38656" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656"). InnerVolumeSpecName "kube-api-access-xdmth". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.265992 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38d3cf53-6a1c-4009-9b0a-0638aae38656-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "38d3cf53-6a1c-4009-9b0a-0638aae38656" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.273160 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "38d3cf53-6a1c-4009-9b0a-0638aae38656" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.276084 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38d3cf53-6a1c-4009-9b0a-0638aae38656-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "38d3cf53-6a1c-4009-9b0a-0638aae38656" (UID: "38d3cf53-6a1c-4009-9b0a-0638aae38656"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.355512 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdmth\" (UniqueName: \"kubernetes.io/projected/38d3cf53-6a1c-4009-9b0a-0638aae38656-kube-api-access-xdmth\") on node \"crc\" DevicePath \"\"" Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.355543 4768 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/38d3cf53-6a1c-4009-9b0a-0638aae38656-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.355555 4768 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/38d3cf53-6a1c-4009-9b0a-0638aae38656-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.355566 4768 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/38d3cf53-6a1c-4009-9b0a-0638aae38656-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.355577 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38d3cf53-6a1c-4009-9b0a-0638aae38656-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.355588 4768 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/38d3cf53-6a1c-4009-9b0a-0638aae38656-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 17:00:35 crc kubenswrapper[4768]: I1124 17:00:35.355599 4768 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/38d3cf53-6a1c-4009-9b0a-0638aae38656-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 17:00:36 crc kubenswrapper[4768]: I1124 17:00:36.076014 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" event={"ID":"38d3cf53-6a1c-4009-9b0a-0638aae38656","Type":"ContainerDied","Data":"8f8522fe62687af201b90503264ff900f3f0681a25686a7a7abb4e5d32c0c9ae"} Nov 24 17:00:36 crc kubenswrapper[4768]: I1124 17:00:36.076427 4768 scope.go:117] "RemoveContainer" containerID="9a3c92150f0a02a172deff3b9bf9821fb05d24a601c8bb39569128e7909d3c49" Nov 24 17:00:36 crc kubenswrapper[4768]: I1124 17:00:36.076111 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mgmbb" Nov 24 17:00:36 crc kubenswrapper[4768]: I1124 17:00:36.110193 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mgmbb"] Nov 24 17:00:36 crc kubenswrapper[4768]: I1124 17:00:36.116045 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mgmbb"] Nov 24 17:00:37 crc kubenswrapper[4768]: I1124 17:00:37.588574 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38d3cf53-6a1c-4009-9b0a-0638aae38656" path="/var/lib/kubelet/pods/38d3cf53-6a1c-4009-9b0a-0638aae38656/volumes" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.165833 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-vcfg5"] Nov 24 17:01:18 crc kubenswrapper[4768]: E1124 17:01:18.166739 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="095f5a55-c57f-40e6-8465-c30d81b324de" containerName="collect-profiles" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.166756 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="095f5a55-c57f-40e6-8465-c30d81b324de" containerName="collect-profiles" Nov 24 17:01:18 crc kubenswrapper[4768]: E1124 17:01:18.166776 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d3cf53-6a1c-4009-9b0a-0638aae38656" containerName="registry" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.166785 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d3cf53-6a1c-4009-9b0a-0638aae38656" containerName="registry" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.166920 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="095f5a55-c57f-40e6-8465-c30d81b324de" containerName="collect-profiles" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.166935 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d3cf53-6a1c-4009-9b0a-0638aae38656" containerName="registry" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.167424 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-vcfg5" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.170032 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.170520 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.170611 4768 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-v7dzp" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.178861 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-hbcgj"] Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.179754 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-hbcgj" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.182393 4768 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-484zp" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.186335 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-vcfg5"] Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.189305 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-hbcgj"] Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.202675 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-kd7w2"] Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.203617 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-kd7w2" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.205822 4768 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-qpdkt" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.214266 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-kd7w2"] Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.280277 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr5z4\" (UniqueName: \"kubernetes.io/projected/6e2775e6-ea84-4b7d-a5e7-0ddc4b3d174b-kube-api-access-rr5z4\") pod \"cert-manager-webhook-5655c58dd6-kd7w2\" (UID: \"6e2775e6-ea84-4b7d-a5e7-0ddc4b3d174b\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-kd7w2" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.280338 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4rjt\" (UniqueName: \"kubernetes.io/projected/406ba9bc-fe9f-4e90-be27-c7947c0049cd-kube-api-access-k4rjt\") pod \"cert-manager-5b446d88c5-hbcgj\" (UID: \"406ba9bc-fe9f-4e90-be27-c7947c0049cd\") " pod="cert-manager/cert-manager-5b446d88c5-hbcgj" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.280489 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7xx2\" (UniqueName: \"kubernetes.io/projected/553c5463-b1f6-410c-a1d6-032a7c57d30c-kube-api-access-l7xx2\") pod \"cert-manager-cainjector-7f985d654d-vcfg5\" (UID: \"553c5463-b1f6-410c-a1d6-032a7c57d30c\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-vcfg5" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.381668 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr5z4\" (UniqueName: \"kubernetes.io/projected/6e2775e6-ea84-4b7d-a5e7-0ddc4b3d174b-kube-api-access-rr5z4\") pod \"cert-manager-webhook-5655c58dd6-kd7w2\" (UID: \"6e2775e6-ea84-4b7d-a5e7-0ddc4b3d174b\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-kd7w2" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.381711 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4rjt\" (UniqueName: \"kubernetes.io/projected/406ba9bc-fe9f-4e90-be27-c7947c0049cd-kube-api-access-k4rjt\") pod \"cert-manager-5b446d88c5-hbcgj\" (UID: \"406ba9bc-fe9f-4e90-be27-c7947c0049cd\") " pod="cert-manager/cert-manager-5b446d88c5-hbcgj" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.381762 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7xx2\" (UniqueName: \"kubernetes.io/projected/553c5463-b1f6-410c-a1d6-032a7c57d30c-kube-api-access-l7xx2\") pod \"cert-manager-cainjector-7f985d654d-vcfg5\" (UID: \"553c5463-b1f6-410c-a1d6-032a7c57d30c\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-vcfg5" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.401508 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4rjt\" (UniqueName: \"kubernetes.io/projected/406ba9bc-fe9f-4e90-be27-c7947c0049cd-kube-api-access-k4rjt\") pod \"cert-manager-5b446d88c5-hbcgj\" (UID: \"406ba9bc-fe9f-4e90-be27-c7947c0049cd\") " pod="cert-manager/cert-manager-5b446d88c5-hbcgj" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.401966 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7xx2\" (UniqueName: \"kubernetes.io/projected/553c5463-b1f6-410c-a1d6-032a7c57d30c-kube-api-access-l7xx2\") pod \"cert-manager-cainjector-7f985d654d-vcfg5\" (UID: \"553c5463-b1f6-410c-a1d6-032a7c57d30c\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-vcfg5" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.406937 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr5z4\" (UniqueName: \"kubernetes.io/projected/6e2775e6-ea84-4b7d-a5e7-0ddc4b3d174b-kube-api-access-rr5z4\") pod \"cert-manager-webhook-5655c58dd6-kd7w2\" (UID: \"6e2775e6-ea84-4b7d-a5e7-0ddc4b3d174b\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-kd7w2" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.486072 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-vcfg5" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.499089 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-hbcgj" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.520545 4768 util.go:30] "No sandbox for pod can be found. 
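Unlike the earlier pods, each cert-manager pod mounts exactly one volume, a kube-api-access-* projected volume, which bundles the bound service-account token, the cluster CA bundle, and the namespace file into a single directory. Inside the container those land at the conventional path used below; the sketch just reads them back (the paths are Kubernetes defaults, and the program reports errors rather than failing when run outside a pod):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Standard mount point for kube-api-access-* projected volumes.
    	dir := "/var/run/secrets/kubernetes.io/serviceaccount"
    	for _, name := range []string{"token", "ca.crt", "namespace"} {
    		b, err := os.ReadFile(filepath.Join(dir, name))
    		if err != nil {
    			fmt.Printf("%s: %v\n", name, err) // not running in a pod
    			continue
    		}
    		fmt.Printf("%s: %d bytes\n", name, len(b))
    	}
    }
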
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-kd7w2" Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.845953 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-kd7w2"] Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.859390 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.911671 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-hbcgj"] Nov 24 17:01:18 crc kubenswrapper[4768]: W1124 17:01:18.915744 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod406ba9bc_fe9f_4e90_be27_c7947c0049cd.slice/crio-7f433d82ef1f99461f81b3c7d8313c551b5f8319285775258240c14f9091818c WatchSource:0}: Error finding container 7f433d82ef1f99461f81b3c7d8313c551b5f8319285775258240c14f9091818c: Status 404 returned error can't find the container with id 7f433d82ef1f99461f81b3c7d8313c551b5f8319285775258240c14f9091818c Nov 24 17:01:18 crc kubenswrapper[4768]: I1124 17:01:18.916671 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-vcfg5"] Nov 24 17:01:18 crc kubenswrapper[4768]: W1124 17:01:18.918034 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod553c5463_b1f6_410c_a1d6_032a7c57d30c.slice/crio-723f8e3937d392c0ab6f338d919a99a61cef5218e6989dd916a046922d073667 WatchSource:0}: Error finding container 723f8e3937d392c0ab6f338d919a99a61cef5218e6989dd916a046922d073667: Status 404 returned error can't find the container with id 723f8e3937d392c0ab6f338d919a99a61cef5218e6989dd916a046922d073667 Nov 24 17:01:19 crc kubenswrapper[4768]: I1124 17:01:19.340038 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-vcfg5" event={"ID":"553c5463-b1f6-410c-a1d6-032a7c57d30c","Type":"ContainerStarted","Data":"723f8e3937d392c0ab6f338d919a99a61cef5218e6989dd916a046922d073667"} Nov 24 17:01:19 crc kubenswrapper[4768]: I1124 17:01:19.342529 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-kd7w2" event={"ID":"6e2775e6-ea84-4b7d-a5e7-0ddc4b3d174b","Type":"ContainerStarted","Data":"f36a7d11be96dba9552e9117998d5aa6fed53068a19e472884a5f86f657e6e9f"} Nov 24 17:01:19 crc kubenswrapper[4768]: I1124 17:01:19.344069 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-hbcgj" event={"ID":"406ba9bc-fe9f-4e90-be27-c7947c0049cd","Type":"ContainerStarted","Data":"7f433d82ef1f99461f81b3c7d8313c551b5f8319285775258240c14f9091818c"} Nov 24 17:01:25 crc kubenswrapper[4768]: I1124 17:01:25.380518 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-kd7w2" event={"ID":"6e2775e6-ea84-4b7d-a5e7-0ddc4b3d174b","Type":"ContainerStarted","Data":"211bac92be1f537138b84313fa82e8c415f2ef98d792fdab8674a48c712fbc17"} Nov 24 17:01:25 crc kubenswrapper[4768]: I1124 17:01:25.381271 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-kd7w2" Nov 24 17:01:25 crc kubenswrapper[4768]: I1124 17:01:25.384023 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-hbcgj" 
event={"ID":"406ba9bc-fe9f-4e90-be27-c7947c0049cd","Type":"ContainerStarted","Data":"728bb1a2ffec587cb87e1b107ae813660741b7e4025f571e0661cc796d0b5142"} Nov 24 17:01:25 crc kubenswrapper[4768]: I1124 17:01:25.386083 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-vcfg5" event={"ID":"553c5463-b1f6-410c-a1d6-032a7c57d30c","Type":"ContainerStarted","Data":"611a8ba046c4d400340f73d59473a09f6597a403cb306a61d55ce9b08023f8a4"} Nov 24 17:01:25 crc kubenswrapper[4768]: I1124 17:01:25.401732 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-kd7w2" podStartSLOduration=1.5663562739999999 podStartE2EDuration="7.401706212s" podCreationTimestamp="2025-11-24 17:01:18 +0000 UTC" firstStartedPulling="2025-11-24 17:01:18.859110005 +0000 UTC m=+560.106078673" lastFinishedPulling="2025-11-24 17:01:24.694459933 +0000 UTC m=+565.941428611" observedRunningTime="2025-11-24 17:01:25.401090165 +0000 UTC m=+566.648058843" watchObservedRunningTime="2025-11-24 17:01:25.401706212 +0000 UTC m=+566.648674900" Nov 24 17:01:25 crc kubenswrapper[4768]: I1124 17:01:25.415954 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-vcfg5" podStartSLOduration=1.6531253320000001 podStartE2EDuration="7.415936059s" podCreationTimestamp="2025-11-24 17:01:18 +0000 UTC" firstStartedPulling="2025-11-24 17:01:18.92038882 +0000 UTC m=+560.167357478" lastFinishedPulling="2025-11-24 17:01:24.683199537 +0000 UTC m=+565.930168205" observedRunningTime="2025-11-24 17:01:25.414560491 +0000 UTC m=+566.661529159" watchObservedRunningTime="2025-11-24 17:01:25.415936059 +0000 UTC m=+566.662904747" Nov 24 17:01:25 crc kubenswrapper[4768]: I1124 17:01:25.433218 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-hbcgj" podStartSLOduration=1.670352899 podStartE2EDuration="7.433194897s" podCreationTimestamp="2025-11-24 17:01:18 +0000 UTC" firstStartedPulling="2025-11-24 17:01:18.917917943 +0000 UTC m=+560.164886601" lastFinishedPulling="2025-11-24 17:01:24.680759921 +0000 UTC m=+565.927728599" observedRunningTime="2025-11-24 17:01:25.429930759 +0000 UTC m=+566.676899427" watchObservedRunningTime="2025-11-24 17:01:25.433194897 +0000 UTC m=+566.680163555" Nov 24 17:01:28 crc kubenswrapper[4768]: I1124 17:01:28.909062 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-98lk9"] Nov 24 17:01:28 crc kubenswrapper[4768]: I1124 17:01:28.909859 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovn-controller" containerID="cri-o://a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9" gracePeriod=30 Nov 24 17:01:28 crc kubenswrapper[4768]: I1124 17:01:28.909983 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="kube-rbac-proxy-node" containerID="cri-o://5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3" gracePeriod=30 Nov 24 17:01:28 crc kubenswrapper[4768]: I1124 17:01:28.910010 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" 
containerName="ovn-acl-logging" containerID="cri-o://5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec" gracePeriod=30 Nov 24 17:01:28 crc kubenswrapper[4768]: I1124 17:01:28.909875 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="nbdb" containerID="cri-o://0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169" gracePeriod=30 Nov 24 17:01:28 crc kubenswrapper[4768]: I1124 17:01:28.910089 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="northd" containerID="cri-o://d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92" gracePeriod=30 Nov 24 17:01:28 crc kubenswrapper[4768]: I1124 17:01:28.910072 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="sbdb" containerID="cri-o://bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572" gracePeriod=30 Nov 24 17:01:28 crc kubenswrapper[4768]: I1124 17:01:28.910129 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4" gracePeriod=30 Nov 24 17:01:28 crc kubenswrapper[4768]: I1124 17:01:28.955807 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovnkube-controller" containerID="cri-o://a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4" gracePeriod=30 Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.205571 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-98lk9_17a83d5e-e5e7-422d-ab0e-647ca2eefb37/ovnkube-controller/3.log" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.207716 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-98lk9_17a83d5e-e5e7-422d-ab0e-647ca2eefb37/ovn-acl-logging/0.log" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.208541 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-98lk9_17a83d5e-e5e7-422d-ab0e-647ca2eefb37/ovn-controller/0.log" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.209614 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.264572 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9g9pn"] Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.264902 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovnkube-controller" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.264932 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovnkube-controller" Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.264948 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="nbdb" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.264961 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="nbdb" Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.264979 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovnkube-controller" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.264992 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovnkube-controller" Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.265010 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovn-acl-logging" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.265022 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovn-acl-logging" Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.265038 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="kube-rbac-proxy-node" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.265049 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="kube-rbac-proxy-node" Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.265069 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovnkube-controller" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.265081 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovnkube-controller" Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.265097 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="northd" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.265108 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="northd" Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.265123 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="sbdb" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.265135 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="sbdb" Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.265157 4768 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.265170 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.265196 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="kubecfg-setup" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.265208 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="kubecfg-setup" Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.265222 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovn-controller" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.265235 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovn-controller" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.265422 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="northd" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.265444 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovnkube-controller" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.265457 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="kube-rbac-proxy-node" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.265469 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="nbdb" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.265485 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovn-controller" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.265505 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="sbdb" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.265525 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovn-acl-logging" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.265543 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovnkube-controller" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.265555 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.265574 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovnkube-controller" Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.265722 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovnkube-controller" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.265735 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovnkube-controller" Nov 24 17:01:29 crc kubenswrapper[4768]: 
E1124 17:01:29.265765 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovnkube-controller" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.265777 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovnkube-controller" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.265923 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovnkube-controller" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.265949 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerName="ovnkube-controller" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.268609 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.394382 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-systemd-units\") pod \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.394484 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-log-socket\") pod \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.394558 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-env-overrides\") pod \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.394551 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "17a83d5e-e5e7-422d-ab0e-647ca2eefb37" (UID: "17a83d5e-e5e7-422d-ab0e-647ca2eefb37"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.394701 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "17a83d5e-e5e7-422d-ab0e-647ca2eefb37" (UID: "17a83d5e-e5e7-422d-ab0e-647ca2eefb37"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.394693 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-log-socket" (OuterVolumeSpecName: "log-socket") pod "17a83d5e-e5e7-422d-ab0e-647ca2eefb37" (UID: "17a83d5e-e5e7-422d-ab0e-647ca2eefb37"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.395142 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "17a83d5e-e5e7-422d-ab0e-647ca2eefb37" (UID: "17a83d5e-e5e7-422d-ab0e-647ca2eefb37"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.394593 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-run-netns\") pod \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.395275 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdzd7\" (UniqueName: \"kubernetes.io/projected/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-kube-api-access-fdzd7\") pod \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.395435 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "17a83d5e-e5e7-422d-ab0e-647ca2eefb37" (UID: "17a83d5e-e5e7-422d-ab0e-647ca2eefb37"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.395308 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-kubelet\") pod \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396343 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-run-systemd\") pod \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396414 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-run-openvswitch\") pod \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396439 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-run-ovn\") pod \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396462 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-cni-netd\") pod \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396484 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-run-ovn-kubernetes\") pod \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396509 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-cni-bin\") pod \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396533 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-var-lib-openvswitch\") pod \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396512 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "17a83d5e-e5e7-422d-ab0e-647ca2eefb37" (UID: "17a83d5e-e5e7-422d-ab0e-647ca2eefb37"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396558 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-ovnkube-config\") pod \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396576 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "17a83d5e-e5e7-422d-ab0e-647ca2eefb37" (UID: "17a83d5e-e5e7-422d-ab0e-647ca2eefb37"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396579 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-node-log\") pod \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396606 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-node-log" (OuterVolumeSpecName: "node-log") pod "17a83d5e-e5e7-422d-ab0e-647ca2eefb37" (UID: "17a83d5e-e5e7-422d-ab0e-647ca2eefb37"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396637 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "17a83d5e-e5e7-422d-ab0e-647ca2eefb37" (UID: "17a83d5e-e5e7-422d-ab0e-647ca2eefb37"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396663 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "17a83d5e-e5e7-422d-ab0e-647ca2eefb37" (UID: "17a83d5e-e5e7-422d-ab0e-647ca2eefb37"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396665 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-var-lib-cni-networks-ovn-kubernetes\") pod \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396708 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-ovn-node-metrics-cert\") pod \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396728 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-slash\") pod \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396708 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "17a83d5e-e5e7-422d-ab0e-647ca2eefb37" (UID: "17a83d5e-e5e7-422d-ab0e-647ca2eefb37"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396715 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "17a83d5e-e5e7-422d-ab0e-647ca2eefb37" (UID: "17a83d5e-e5e7-422d-ab0e-647ca2eefb37"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396730 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "17a83d5e-e5e7-422d-ab0e-647ca2eefb37" (UID: "17a83d5e-e5e7-422d-ab0e-647ca2eefb37"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396758 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-ovnkube-script-lib\") pod \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396812 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-etc-openvswitch\") pod \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\" (UID: \"17a83d5e-e5e7-422d-ab0e-647ca2eefb37\") " Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.396783 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-slash" (OuterVolumeSpecName: "host-slash") pod "17a83d5e-e5e7-422d-ab0e-647ca2eefb37" (UID: "17a83d5e-e5e7-422d-ab0e-647ca2eefb37"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.397150 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-ovnkube-config\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.397226 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-systemd-units\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.397248 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "17a83d5e-e5e7-422d-ab0e-647ca2eefb37" (UID: "17a83d5e-e5e7-422d-ab0e-647ca2eefb37"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.397160 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "17a83d5e-e5e7-422d-ab0e-647ca2eefb37" (UID: "17a83d5e-e5e7-422d-ab0e-647ca2eefb37"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.397253 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-var-lib-openvswitch\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.397585 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-ovn-node-metrics-cert\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.397671 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.397681 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "17a83d5e-e5e7-422d-ab0e-647ca2eefb37" (UID: "17a83d5e-e5e7-422d-ab0e-647ca2eefb37"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.397807 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-host-kubelet\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.397861 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-log-socket\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.397924 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-host-cni-netd\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.397955 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-node-log\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398046 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-host-cni-bin\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398089 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-run-ovn\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398146 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-env-overrides\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398237 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-ovnkube-script-lib\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398279 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-host-slash\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398326 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-run-openvswitch\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398442 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtgr5\" (UniqueName: \"kubernetes.io/projected/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-kube-api-access-qtgr5\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398485 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-host-run-ovn-kubernetes\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398523 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-etc-openvswitch\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398556 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-run-systemd\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398592 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-host-run-netns\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398691 4768 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-log-socket\") on node \"crc\" DevicePath \"\"" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398715 4768 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398735 4768 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398753 4768 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398774 4768 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398790 4768 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398806 4768 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398825 4768 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398842 4768 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398858 4768 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398875 4768 reconciler_common.go:293] "Volume detached for volume \"node-log\" 
(UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-node-log\") on node \"crc\" DevicePath \"\"" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398891 4768 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398908 4768 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398926 4768 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-host-slash\") on node \"crc\" DevicePath \"\"" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398944 4768 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398961 4768 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.398978 4768 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.407484 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "17a83d5e-e5e7-422d-ab0e-647ca2eefb37" (UID: "17a83d5e-e5e7-422d-ab0e-647ca2eefb37"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.407657 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-kube-api-access-fdzd7" (OuterVolumeSpecName: "kube-api-access-fdzd7") pod "17a83d5e-e5e7-422d-ab0e-647ca2eefb37" (UID: "17a83d5e-e5e7-422d-ab0e-647ca2eefb37"). InnerVolumeSpecName "kube-api-access-fdzd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.409335 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "17a83d5e-e5e7-422d-ab0e-647ca2eefb37" (UID: "17a83d5e-e5e7-422d-ab0e-647ca2eefb37"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.413002 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-98lk9_17a83d5e-e5e7-422d-ab0e-647ca2eefb37/ovnkube-controller/3.log" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.416103 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-98lk9_17a83d5e-e5e7-422d-ab0e-647ca2eefb37/ovn-acl-logging/0.log" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.416814 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-98lk9_17a83d5e-e5e7-422d-ab0e-647ca2eefb37/ovn-controller/0.log" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417198 4768 generic.go:334] "Generic (PLEG): container finished" podID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerID="a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4" exitCode=0 Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417231 4768 generic.go:334] "Generic (PLEG): container finished" podID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerID="bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572" exitCode=0 Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417241 4768 generic.go:334] "Generic (PLEG): container finished" podID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerID="0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169" exitCode=0 Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417253 4768 generic.go:334] "Generic (PLEG): container finished" podID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerID="d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92" exitCode=0 Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417263 4768 generic.go:334] "Generic (PLEG): container finished" podID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerID="9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4" exitCode=0 Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417274 4768 generic.go:334] "Generic (PLEG): container finished" podID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerID="5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3" exitCode=0 Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417284 4768 generic.go:334] "Generic (PLEG): container finished" podID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerID="5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec" exitCode=143 Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417273 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerDied","Data":"a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417330 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerDied","Data":"bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417341 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417374 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerDied","Data":"0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417393 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerDied","Data":"d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417412 4768 scope.go:117] "RemoveContainer" containerID="a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417294 4768 generic.go:334] "Generic (PLEG): container finished" podID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" containerID="a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9" exitCode=143 Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417417 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerDied","Data":"9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417546 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerDied","Data":"5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417571 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417587 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417596 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417604 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417612 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417619 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417627 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec"} Nov 24 17:01:29 crc 
kubenswrapper[4768]: I1124 17:01:29.417634 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417642 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417652 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerDied","Data":"5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417665 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417674 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417681 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417688 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417696 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417703 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417710 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417717 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417724 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417731 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417741 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" 
event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerDied","Data":"a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417751 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417759 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417767 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417774 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417781 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417790 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417797 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417804 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417811 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417818 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417828 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-98lk9" event={"ID":"17a83d5e-e5e7-422d-ab0e-647ca2eefb37","Type":"ContainerDied","Data":"159e1a87394f186553e65b9f559112b99abb5025bb98eb3095bce647f632a919"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417839 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417849 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417857 4768 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417865 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417874 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417881 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417889 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417896 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417903 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.417911 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.419214 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k8vfj_b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a/kube-multus/2.log" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.419875 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k8vfj_b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a/kube-multus/1.log" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.419960 4768 generic.go:334] "Generic (PLEG): container finished" podID="b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a" containerID="2fbf7caa990d15db46c9ad04c45497db183c9d27d796bac50c5946e2dbdeb941" exitCode=2 Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.420041 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k8vfj" event={"ID":"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a","Type":"ContainerDied","Data":"2fbf7caa990d15db46c9ad04c45497db183c9d27d796bac50c5946e2dbdeb941"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.420106 4768 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e9a43420d4b39e1291af651377602da94003efadb5a395178d644b9333412e35"} Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.421064 4768 scope.go:117] "RemoveContainer" containerID="2fbf7caa990d15db46c9ad04c45497db183c9d27d796bac50c5946e2dbdeb941" Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.421730 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=kube-multus pod=multus-k8vfj_openshift-multus(b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a)\"" pod="openshift-multus/multus-k8vfj" podUID="b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.448822 4768 scope.go:117] "RemoveContainer" containerID="10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.473571 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-98lk9"] Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.477801 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-98lk9"] Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.485804 4768 scope.go:117] "RemoveContainer" containerID="bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500005 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-systemd-units\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500066 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-var-lib-openvswitch\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500109 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-ovn-node-metrics-cert\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500148 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500186 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-host-kubelet\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500213 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-log-socket\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500252 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-host-cni-netd\") pod \"ovnkube-node-9g9pn\" (UID: 
\"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500283 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-node-log\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500338 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-host-cni-bin\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500419 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-run-ovn\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500457 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-env-overrides\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500511 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-ovnkube-script-lib\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500540 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-host-slash\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500578 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-run-openvswitch\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500637 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtgr5\" (UniqueName: \"kubernetes.io/projected/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-kube-api-access-qtgr5\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500670 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-host-run-ovn-kubernetes\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" 
Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500703 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-etc-openvswitch\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500736 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-run-systemd\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500771 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-host-run-netns\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500811 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-ovnkube-config\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500875 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdzd7\" (UniqueName: \"kubernetes.io/projected/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-kube-api-access-fdzd7\") on node \"crc\" DevicePath \"\"" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500898 4768 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.500919 4768 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/17a83d5e-e5e7-422d-ab0e-647ca2eefb37-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.501791 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-systemd-units\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.501847 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-var-lib-openvswitch\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.502282 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-ovnkube-config\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.502413 
4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-run-ovn\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.502586 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-host-cni-netd\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.502668 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.502710 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-host-kubelet\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.502742 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-log-socket\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.502783 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-node-log\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.502862 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-host-cni-bin\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.502914 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-etc-openvswitch\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.502946 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-host-run-ovn-kubernetes\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.502978 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-host-slash\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.503073 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-env-overrides\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.503145 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-run-openvswitch\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.503145 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-run-systemd\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.503486 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-host-run-netns\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.505106 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-ovnkube-script-lib\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.505833 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-ovn-node-metrics-cert\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.506151 4768 scope.go:117] "RemoveContainer" containerID="0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.519504 4768 scope.go:117] "RemoveContainer" containerID="d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.523590 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtgr5\" (UniqueName: \"kubernetes.io/projected/a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae-kube-api-access-qtgr5\") pod \"ovnkube-node-9g9pn\" (UID: \"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae\") " pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.532534 4768 scope.go:117] "RemoveContainer" containerID="9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.547825 4768 scope.go:117] "RemoveContainer" 
containerID="5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.564204 4768 scope.go:117] "RemoveContainer" containerID="5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.588114 4768 scope.go:117] "RemoveContainer" containerID="a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.595114 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.596233 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17a83d5e-e5e7-422d-ab0e-647ca2eefb37" path="/var/lib/kubelet/pods/17a83d5e-e5e7-422d-ab0e-647ca2eefb37/volumes" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.618615 4768 scope.go:117] "RemoveContainer" containerID="e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.638449 4768 scope.go:117] "RemoveContainer" containerID="a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4" Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.638853 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4\": container with ID starting with a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4 not found: ID does not exist" containerID="a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.638881 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4"} err="failed to get container status \"a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4\": rpc error: code = NotFound desc = could not find container \"a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4\": container with ID starting with a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.638903 4768 scope.go:117] "RemoveContainer" containerID="10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07" Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.639269 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07\": container with ID starting with 10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07 not found: ID does not exist" containerID="10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.639309 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07"} err="failed to get container status \"10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07\": rpc error: code = NotFound desc = could not find container \"10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07\": container with ID starting with 10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07 not found: ID does not exist" Nov 24 17:01:29 crc 
kubenswrapper[4768]: I1124 17:01:29.639327 4768 scope.go:117] "RemoveContainer" containerID="bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572" Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.639678 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\": container with ID starting with bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572 not found: ID does not exist" containerID="bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.639697 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572"} err="failed to get container status \"bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\": rpc error: code = NotFound desc = could not find container \"bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\": container with ID starting with bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.639709 4768 scope.go:117] "RemoveContainer" containerID="0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169" Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.640001 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\": container with ID starting with 0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169 not found: ID does not exist" containerID="0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.640025 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169"} err="failed to get container status \"0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\": rpc error: code = NotFound desc = could not find container \"0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\": container with ID starting with 0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.640038 4768 scope.go:117] "RemoveContainer" containerID="d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92" Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.640418 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\": container with ID starting with d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92 not found: ID does not exist" containerID="d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.640455 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92"} err="failed to get container status \"d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\": rpc error: code = NotFound desc = could not find container 
\"d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\": container with ID starting with d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.640483 4768 scope.go:117] "RemoveContainer" containerID="9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4" Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.640766 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\": container with ID starting with 9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4 not found: ID does not exist" containerID="9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.640788 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4"} err="failed to get container status \"9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\": rpc error: code = NotFound desc = could not find container \"9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\": container with ID starting with 9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.640800 4768 scope.go:117] "RemoveContainer" containerID="5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3" Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.641056 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\": container with ID starting with 5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3 not found: ID does not exist" containerID="5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.641118 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3"} err="failed to get container status \"5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\": rpc error: code = NotFound desc = could not find container \"5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\": container with ID starting with 5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.641156 4768 scope.go:117] "RemoveContainer" containerID="5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec" Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.641573 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\": container with ID starting with 5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec not found: ID does not exist" containerID="5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.641598 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec"} 
err="failed to get container status \"5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\": rpc error: code = NotFound desc = could not find container \"5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\": container with ID starting with 5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.641613 4768 scope.go:117] "RemoveContainer" containerID="a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9" Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.641930 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\": container with ID starting with a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9 not found: ID does not exist" containerID="a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.641952 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9"} err="failed to get container status \"a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\": rpc error: code = NotFound desc = could not find container \"a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\": container with ID starting with a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.641965 4768 scope.go:117] "RemoveContainer" containerID="e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa" Nov 24 17:01:29 crc kubenswrapper[4768]: E1124 17:01:29.642191 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\": container with ID starting with e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa not found: ID does not exist" containerID="e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.642229 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa"} err="failed to get container status \"e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\": rpc error: code = NotFound desc = could not find container \"e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\": container with ID starting with e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.642254 4768 scope.go:117] "RemoveContainer" containerID="a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.642588 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4"} err="failed to get container status \"a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4\": rpc error: code = NotFound desc = could not find container \"a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4\": container with ID starting with 
a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.642607 4768 scope.go:117] "RemoveContainer" containerID="10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.642908 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07"} err="failed to get container status \"10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07\": rpc error: code = NotFound desc = could not find container \"10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07\": container with ID starting with 10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.642943 4768 scope.go:117] "RemoveContainer" containerID="bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.643218 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572"} err="failed to get container status \"bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\": rpc error: code = NotFound desc = could not find container \"bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\": container with ID starting with bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.643237 4768 scope.go:117] "RemoveContainer" containerID="0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.643516 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169"} err="failed to get container status \"0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\": rpc error: code = NotFound desc = could not find container \"0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\": container with ID starting with 0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.643533 4768 scope.go:117] "RemoveContainer" containerID="d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.643779 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92"} err="failed to get container status \"d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\": rpc error: code = NotFound desc = could not find container \"d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\": container with ID starting with d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.643807 4768 scope.go:117] "RemoveContainer" containerID="9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.644099 4768 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4"} err="failed to get container status \"9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\": rpc error: code = NotFound desc = could not find container \"9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\": container with ID starting with 9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.644223 4768 scope.go:117] "RemoveContainer" containerID="5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.644657 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3"} err="failed to get container status \"5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\": rpc error: code = NotFound desc = could not find container \"5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\": container with ID starting with 5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.644687 4768 scope.go:117] "RemoveContainer" containerID="5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.644977 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec"} err="failed to get container status \"5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\": rpc error: code = NotFound desc = could not find container \"5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\": container with ID starting with 5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.645093 4768 scope.go:117] "RemoveContainer" containerID="a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.645519 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9"} err="failed to get container status \"a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\": rpc error: code = NotFound desc = could not find container \"a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\": container with ID starting with a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.645558 4768 scope.go:117] "RemoveContainer" containerID="e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.645867 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa"} err="failed to get container status \"e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\": rpc error: code = NotFound desc = could not find container \"e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\": container with ID starting with e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa not found: ID does not exist" Nov 
24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.645895 4768 scope.go:117] "RemoveContainer" containerID="a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.646218 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4"} err="failed to get container status \"a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4\": rpc error: code = NotFound desc = could not find container \"a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4\": container with ID starting with a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.646364 4768 scope.go:117] "RemoveContainer" containerID="10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.646798 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07"} err="failed to get container status \"10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07\": rpc error: code = NotFound desc = could not find container \"10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07\": container with ID starting with 10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.646836 4768 scope.go:117] "RemoveContainer" containerID="bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.647124 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572"} err="failed to get container status \"bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\": rpc error: code = NotFound desc = could not find container \"bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\": container with ID starting with bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.647149 4768 scope.go:117] "RemoveContainer" containerID="0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.647461 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169"} err="failed to get container status \"0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\": rpc error: code = NotFound desc = could not find container \"0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\": container with ID starting with 0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.647565 4768 scope.go:117] "RemoveContainer" containerID="d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.647974 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92"} err="failed to get container status 
\"d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\": rpc error: code = NotFound desc = could not find container \"d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\": container with ID starting with d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.647997 4768 scope.go:117] "RemoveContainer" containerID="9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.648297 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4"} err="failed to get container status \"9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\": rpc error: code = NotFound desc = could not find container \"9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\": container with ID starting with 9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.648415 4768 scope.go:117] "RemoveContainer" containerID="5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.648805 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3"} err="failed to get container status \"5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\": rpc error: code = NotFound desc = could not find container \"5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\": container with ID starting with 5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.648835 4768 scope.go:117] "RemoveContainer" containerID="5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.649117 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec"} err="failed to get container status \"5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\": rpc error: code = NotFound desc = could not find container \"5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\": container with ID starting with 5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.649142 4768 scope.go:117] "RemoveContainer" containerID="a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.649443 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9"} err="failed to get container status \"a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\": rpc error: code = NotFound desc = could not find container \"a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\": container with ID starting with a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.649539 4768 scope.go:117] "RemoveContainer" 
containerID="e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.649912 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa"} err="failed to get container status \"e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\": rpc error: code = NotFound desc = could not find container \"e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\": container with ID starting with e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.649935 4768 scope.go:117] "RemoveContainer" containerID="a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.650229 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4"} err="failed to get container status \"a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4\": rpc error: code = NotFound desc = could not find container \"a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4\": container with ID starting with a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.650330 4768 scope.go:117] "RemoveContainer" containerID="10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.650710 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07"} err="failed to get container status \"10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07\": rpc error: code = NotFound desc = could not find container \"10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07\": container with ID starting with 10e1b5d93af7057474c8a885d4acfc2e466ec543d4be74fb8208a01f958dbb07 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.650729 4768 scope.go:117] "RemoveContainer" containerID="bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.650983 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572"} err="failed to get container status \"bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\": rpc error: code = NotFound desc = could not find container \"bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572\": container with ID starting with bdafe7788467fc5d83d031a78712c5e280c3e98ec5dcf1746ea08b2a9c23e572 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.651084 4768 scope.go:117] "RemoveContainer" containerID="0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.651445 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169"} err="failed to get container status \"0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\": rpc error: code = NotFound desc = could not find 
container \"0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169\": container with ID starting with 0094c311c53e806bd5191b5c0045705b9131090595c5020f3327cbdddeba0169 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.651470 4768 scope.go:117] "RemoveContainer" containerID="d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.651731 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92"} err="failed to get container status \"d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\": rpc error: code = NotFound desc = could not find container \"d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92\": container with ID starting with d352c2279682b0d0e6a62f49330c43eee73de65b3aa930e2eee382142ba65e92 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.651843 4768 scope.go:117] "RemoveContainer" containerID="9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.652181 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4"} err="failed to get container status \"9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\": rpc error: code = NotFound desc = could not find container \"9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4\": container with ID starting with 9ba39715a7ba4e282ae1b70f24ad9424718a5c4f32dc2daec453d773e109c6c4 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.652201 4768 scope.go:117] "RemoveContainer" containerID="5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.652518 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3"} err="failed to get container status \"5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\": rpc error: code = NotFound desc = could not find container \"5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3\": container with ID starting with 5d03487d3d4af0043308256cecefdd13e04f4d2bc280415ca26bdc4ce630bbf3 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.652612 4768 scope.go:117] "RemoveContainer" containerID="5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.653005 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec"} err="failed to get container status \"5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\": rpc error: code = NotFound desc = could not find container \"5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec\": container with ID starting with 5e97e2da9567ec4a23585b945b7911a72c4efe17c542864d58114a2709c6afec not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.653025 4768 scope.go:117] "RemoveContainer" containerID="a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.653392 4768 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9"} err="failed to get container status \"a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\": rpc error: code = NotFound desc = could not find container \"a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9\": container with ID starting with a69f0f5291d1fdec31b034d3e4e4f4a1700dfa98e47e9966f2083a9de06fd7d9 not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.653503 4768 scope.go:117] "RemoveContainer" containerID="e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.653906 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa"} err="failed to get container status \"e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\": rpc error: code = NotFound desc = could not find container \"e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa\": container with ID starting with e0da700bcc436b65c1a8b178e313975b1b3c188107fcd89ef9f4ea02fd1c5dfa not found: ID does not exist" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.653927 4768 scope.go:117] "RemoveContainer" containerID="a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4" Nov 24 17:01:29 crc kubenswrapper[4768]: I1124 17:01:29.654236 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4"} err="failed to get container status \"a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4\": rpc error: code = NotFound desc = could not find container \"a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4\": container with ID starting with a75836d53186761d810f5b508221bce69666f48f60c3a9f3a5bc329428a91bb4 not found: ID does not exist" Nov 24 17:01:30 crc kubenswrapper[4768]: I1124 17:01:30.428674 4768 generic.go:334] "Generic (PLEG): container finished" podID="a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae" containerID="38abc1157e2e0ce6810ce760b7ba7438f350d1180fbaed3a8641e40cf15eb553" exitCode=0 Nov 24 17:01:30 crc kubenswrapper[4768]: I1124 17:01:30.428790 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" event={"ID":"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae","Type":"ContainerDied","Data":"38abc1157e2e0ce6810ce760b7ba7438f350d1180fbaed3a8641e40cf15eb553"} Nov 24 17:01:30 crc kubenswrapper[4768]: I1124 17:01:30.428848 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" event={"ID":"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae","Type":"ContainerStarted","Data":"8b48c35d30ef92e09b7b6e8a74435e87f358d3c4ccd8e73e75bf526d33803c77"} Nov 24 17:01:31 crc kubenswrapper[4768]: I1124 17:01:31.439980 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" event={"ID":"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae","Type":"ContainerStarted","Data":"1f09665028b07b29bd56f502937f60ab8112b3ab2560462f9b171cfeb266c8bc"} Nov 24 17:01:31 crc kubenswrapper[4768]: I1124 17:01:31.440513 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" 
event={"ID":"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae","Type":"ContainerStarted","Data":"b0b9d69bcc6ece105889a435ed1e4e28547587b348102ee2d7c66498ce41915b"} Nov 24 17:01:31 crc kubenswrapper[4768]: I1124 17:01:31.440530 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" event={"ID":"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae","Type":"ContainerStarted","Data":"8bf7a1e2a4f4e5a666ec841547788ff1ab10fbca18781d11ae43baac0326af9d"} Nov 24 17:01:31 crc kubenswrapper[4768]: I1124 17:01:31.440542 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" event={"ID":"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae","Type":"ContainerStarted","Data":"95eba49b1ea9a206cfae89f165edb77c2e693f40aeaf37bf8bb10a34e344cf21"} Nov 24 17:01:31 crc kubenswrapper[4768]: I1124 17:01:31.440556 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" event={"ID":"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae","Type":"ContainerStarted","Data":"a9e8f9cdf6ed75b7f520967dbb5ba830a700393400dad669afc2d111c7df8f03"} Nov 24 17:01:31 crc kubenswrapper[4768]: I1124 17:01:31.440566 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" event={"ID":"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae","Type":"ContainerStarted","Data":"baecd46a4f1c9579422efab54168f10b3ca4705b0dce8f2e6400fdca7778e7d8"} Nov 24 17:01:33 crc kubenswrapper[4768]: I1124 17:01:33.524103 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-kd7w2" Nov 24 17:01:34 crc kubenswrapper[4768]: I1124 17:01:34.469550 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" event={"ID":"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae","Type":"ContainerStarted","Data":"794be023656a3def5d10f72dec71f200eacb50dae12aed449c27721f9b9cd682"} Nov 24 17:01:34 crc kubenswrapper[4768]: I1124 17:01:34.893315 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:01:34 crc kubenswrapper[4768]: I1124 17:01:34.893462 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:01:36 crc kubenswrapper[4768]: I1124 17:01:36.486970 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" event={"ID":"a6f75828-e0f6-4f7d-8a52-4e02dff2a0ae","Type":"ContainerStarted","Data":"fb3b4edfd87160ebbde9cdaf66337e6dc9b6aaed22cad3b55d6260f6903763d8"} Nov 24 17:01:36 crc kubenswrapper[4768]: I1124 17:01:36.487555 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:36 crc kubenswrapper[4768]: I1124 17:01:36.525180 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:36 crc kubenswrapper[4768]: I1124 17:01:36.533213 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" podStartSLOduration=7.533201677 podStartE2EDuration="7.533201677s" podCreationTimestamp="2025-11-24 17:01:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:01:36.530032761 +0000 UTC m=+577.777001429" watchObservedRunningTime="2025-11-24 17:01:36.533201677 +0000 UTC m=+577.780170335" Nov 24 17:01:37 crc kubenswrapper[4768]: I1124 17:01:37.493421 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:37 crc kubenswrapper[4768]: I1124 17:01:37.493503 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:37 crc kubenswrapper[4768]: I1124 17:01:37.525518 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:40 crc kubenswrapper[4768]: I1124 17:01:40.580980 4768 scope.go:117] "RemoveContainer" containerID="2fbf7caa990d15db46c9ad04c45497db183c9d27d796bac50c5946e2dbdeb941" Nov 24 17:01:40 crc kubenswrapper[4768]: E1124 17:01:40.581522 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-k8vfj_openshift-multus(b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a)\"" pod="openshift-multus/multus-k8vfj" podUID="b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a" Nov 24 17:01:51 crc kubenswrapper[4768]: I1124 17:01:51.581318 4768 scope.go:117] "RemoveContainer" containerID="2fbf7caa990d15db46c9ad04c45497db183c9d27d796bac50c5946e2dbdeb941" Nov 24 17:01:52 crc kubenswrapper[4768]: I1124 17:01:52.600574 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k8vfj_b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a/kube-multus/2.log" Nov 24 17:01:52 crc kubenswrapper[4768]: I1124 17:01:52.601432 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k8vfj_b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a/kube-multus/1.log" Nov 24 17:01:52 crc kubenswrapper[4768]: I1124 17:01:52.601496 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-k8vfj" event={"ID":"b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a","Type":"ContainerStarted","Data":"a947aa7d32e357426dfb76370688bfdef83dc9d26e7f916d10a937a168c4bd2a"} Nov 24 17:01:59 crc kubenswrapper[4768]: I1124 17:01:59.634754 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9g9pn" Nov 24 17:01:59 crc kubenswrapper[4768]: I1124 17:01:59.770893 4768 scope.go:117] "RemoveContainer" containerID="e9a43420d4b39e1291af651377602da94003efadb5a395178d644b9333412e35" Nov 24 17:02:00 crc kubenswrapper[4768]: I1124 17:02:00.652596 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-k8vfj_b26d6a15-e6fc-4524-b0d4-f6b6bc195f8a/kube-multus/2.log" Nov 24 17:02:04 crc kubenswrapper[4768]: I1124 17:02:04.893516 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:02:04 crc kubenswrapper[4768]: I1124 17:02:04.893870 4768 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:02:10 crc kubenswrapper[4768]: I1124 17:02:10.454541 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp"] Nov 24 17:02:10 crc kubenswrapper[4768]: I1124 17:02:10.456094 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp" Nov 24 17:02:10 crc kubenswrapper[4768]: I1124 17:02:10.457673 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 17:02:10 crc kubenswrapper[4768]: I1124 17:02:10.466783 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp"] Nov 24 17:02:10 crc kubenswrapper[4768]: I1124 17:02:10.587445 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30c009ab-380d-4bc7-a771-61d41ad10d35-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp\" (UID: \"30c009ab-380d-4bc7-a771-61d41ad10d35\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp" Nov 24 17:02:10 crc kubenswrapper[4768]: I1124 17:02:10.587503 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrjvl\" (UniqueName: \"kubernetes.io/projected/30c009ab-380d-4bc7-a771-61d41ad10d35-kube-api-access-vrjvl\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp\" (UID: \"30c009ab-380d-4bc7-a771-61d41ad10d35\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp" Nov 24 17:02:10 crc kubenswrapper[4768]: I1124 17:02:10.587545 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30c009ab-380d-4bc7-a771-61d41ad10d35-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp\" (UID: \"30c009ab-380d-4bc7-a771-61d41ad10d35\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp" Nov 24 17:02:10 crc kubenswrapper[4768]: I1124 17:02:10.689230 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30c009ab-380d-4bc7-a771-61d41ad10d35-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp\" (UID: \"30c009ab-380d-4bc7-a771-61d41ad10d35\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp" Nov 24 17:02:10 crc kubenswrapper[4768]: I1124 17:02:10.689291 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrjvl\" (UniqueName: \"kubernetes.io/projected/30c009ab-380d-4bc7-a771-61d41ad10d35-kube-api-access-vrjvl\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp\" (UID: \"30c009ab-380d-4bc7-a771-61d41ad10d35\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp" Nov 24 17:02:10 crc kubenswrapper[4768]: I1124 17:02:10.689324 
4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30c009ab-380d-4bc7-a771-61d41ad10d35-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp\" (UID: \"30c009ab-380d-4bc7-a771-61d41ad10d35\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp" Nov 24 17:02:10 crc kubenswrapper[4768]: I1124 17:02:10.689825 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30c009ab-380d-4bc7-a771-61d41ad10d35-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp\" (UID: \"30c009ab-380d-4bc7-a771-61d41ad10d35\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp" Nov 24 17:02:10 crc kubenswrapper[4768]: I1124 17:02:10.689863 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30c009ab-380d-4bc7-a771-61d41ad10d35-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp\" (UID: \"30c009ab-380d-4bc7-a771-61d41ad10d35\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp" Nov 24 17:02:10 crc kubenswrapper[4768]: I1124 17:02:10.710526 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrjvl\" (UniqueName: \"kubernetes.io/projected/30c009ab-380d-4bc7-a771-61d41ad10d35-kube-api-access-vrjvl\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp\" (UID: \"30c009ab-380d-4bc7-a771-61d41ad10d35\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp" Nov 24 17:02:10 crc kubenswrapper[4768]: I1124 17:02:10.772405 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp" Nov 24 17:02:11 crc kubenswrapper[4768]: I1124 17:02:11.021548 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp"] Nov 24 17:02:11 crc kubenswrapper[4768]: W1124 17:02:11.032389 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30c009ab_380d_4bc7_a771_61d41ad10d35.slice/crio-9d86ba1a869b00c3f5fc9e023d33a1301591d3735a35995cc688e0bd9c2d04f7 WatchSource:0}: Error finding container 9d86ba1a869b00c3f5fc9e023d33a1301591d3735a35995cc688e0bd9c2d04f7: Status 404 returned error can't find the container with id 9d86ba1a869b00c3f5fc9e023d33a1301591d3735a35995cc688e0bd9c2d04f7 Nov 24 17:02:11 crc kubenswrapper[4768]: I1124 17:02:11.725239 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp" event={"ID":"30c009ab-380d-4bc7-a771-61d41ad10d35","Type":"ContainerStarted","Data":"17fcc6bbecaa4374b37857f4813171887ef7c60aafbf34f03ff9924e509a94a6"} Nov 24 17:02:11 crc kubenswrapper[4768]: I1124 17:02:11.725287 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp" event={"ID":"30c009ab-380d-4bc7-a771-61d41ad10d35","Type":"ContainerStarted","Data":"9d86ba1a869b00c3f5fc9e023d33a1301591d3735a35995cc688e0bd9c2d04f7"} Nov 24 17:02:12 crc kubenswrapper[4768]: I1124 17:02:12.736530 4768 generic.go:334] "Generic (PLEG): container finished" podID="30c009ab-380d-4bc7-a771-61d41ad10d35" containerID="17fcc6bbecaa4374b37857f4813171887ef7c60aafbf34f03ff9924e509a94a6" exitCode=0 Nov 24 17:02:12 crc kubenswrapper[4768]: I1124 17:02:12.736615 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp" event={"ID":"30c009ab-380d-4bc7-a771-61d41ad10d35","Type":"ContainerDied","Data":"17fcc6bbecaa4374b37857f4813171887ef7c60aafbf34f03ff9924e509a94a6"} Nov 24 17:02:14 crc kubenswrapper[4768]: I1124 17:02:14.752996 4768 generic.go:334] "Generic (PLEG): container finished" podID="30c009ab-380d-4bc7-a771-61d41ad10d35" containerID="a85e4d5c9e47104be4cb79a0f45513d8cc28b13376a225b97fbc3082717a8816" exitCode=0 Nov 24 17:02:14 crc kubenswrapper[4768]: I1124 17:02:14.753148 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp" event={"ID":"30c009ab-380d-4bc7-a771-61d41ad10d35","Type":"ContainerDied","Data":"a85e4d5c9e47104be4cb79a0f45513d8cc28b13376a225b97fbc3082717a8816"} Nov 24 17:02:15 crc kubenswrapper[4768]: I1124 17:02:15.767515 4768 generic.go:334] "Generic (PLEG): container finished" podID="30c009ab-380d-4bc7-a771-61d41ad10d35" containerID="ba20e343a5c002676eb108bd49ea172feca391274070139c72a6dc2ef3bf7fd6" exitCode=0 Nov 24 17:02:15 crc kubenswrapper[4768]: I1124 17:02:15.767582 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp" event={"ID":"30c009ab-380d-4bc7-a771-61d41ad10d35","Type":"ContainerDied","Data":"ba20e343a5c002676eb108bd49ea172feca391274070139c72a6dc2ef3bf7fd6"} Nov 24 17:02:17 crc kubenswrapper[4768]: I1124 17:02:17.113620 4768 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp" Nov 24 17:02:17 crc kubenswrapper[4768]: I1124 17:02:17.279853 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30c009ab-380d-4bc7-a771-61d41ad10d35-bundle\") pod \"30c009ab-380d-4bc7-a771-61d41ad10d35\" (UID: \"30c009ab-380d-4bc7-a771-61d41ad10d35\") " Nov 24 17:02:17 crc kubenswrapper[4768]: I1124 17:02:17.279923 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrjvl\" (UniqueName: \"kubernetes.io/projected/30c009ab-380d-4bc7-a771-61d41ad10d35-kube-api-access-vrjvl\") pod \"30c009ab-380d-4bc7-a771-61d41ad10d35\" (UID: \"30c009ab-380d-4bc7-a771-61d41ad10d35\") " Nov 24 17:02:17 crc kubenswrapper[4768]: I1124 17:02:17.279976 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30c009ab-380d-4bc7-a771-61d41ad10d35-util\") pod \"30c009ab-380d-4bc7-a771-61d41ad10d35\" (UID: \"30c009ab-380d-4bc7-a771-61d41ad10d35\") " Nov 24 17:02:17 crc kubenswrapper[4768]: I1124 17:02:17.281054 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30c009ab-380d-4bc7-a771-61d41ad10d35-bundle" (OuterVolumeSpecName: "bundle") pod "30c009ab-380d-4bc7-a771-61d41ad10d35" (UID: "30c009ab-380d-4bc7-a771-61d41ad10d35"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:02:17 crc kubenswrapper[4768]: I1124 17:02:17.290775 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30c009ab-380d-4bc7-a771-61d41ad10d35-kube-api-access-vrjvl" (OuterVolumeSpecName: "kube-api-access-vrjvl") pod "30c009ab-380d-4bc7-a771-61d41ad10d35" (UID: "30c009ab-380d-4bc7-a771-61d41ad10d35"). InnerVolumeSpecName "kube-api-access-vrjvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:02:17 crc kubenswrapper[4768]: I1124 17:02:17.291434 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30c009ab-380d-4bc7-a771-61d41ad10d35-util" (OuterVolumeSpecName: "util") pod "30c009ab-380d-4bc7-a771-61d41ad10d35" (UID: "30c009ab-380d-4bc7-a771-61d41ad10d35"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:02:17 crc kubenswrapper[4768]: I1124 17:02:17.381308 4768 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30c009ab-380d-4bc7-a771-61d41ad10d35-util\") on node \"crc\" DevicePath \"\"" Nov 24 17:02:17 crc kubenswrapper[4768]: I1124 17:02:17.381397 4768 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30c009ab-380d-4bc7-a771-61d41ad10d35-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:02:17 crc kubenswrapper[4768]: I1124 17:02:17.381417 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrjvl\" (UniqueName: \"kubernetes.io/projected/30c009ab-380d-4bc7-a771-61d41ad10d35-kube-api-access-vrjvl\") on node \"crc\" DevicePath \"\"" Nov 24 17:02:17 crc kubenswrapper[4768]: I1124 17:02:17.796795 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp" event={"ID":"30c009ab-380d-4bc7-a771-61d41ad10d35","Type":"ContainerDied","Data":"9d86ba1a869b00c3f5fc9e023d33a1301591d3735a35995cc688e0bd9c2d04f7"} Nov 24 17:02:17 crc kubenswrapper[4768]: I1124 17:02:17.796845 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d86ba1a869b00c3f5fc9e023d33a1301591d3735a35995cc688e0bd9c2d04f7" Nov 24 17:02:17 crc kubenswrapper[4768]: I1124 17:02:17.796918 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp" Nov 24 17:02:22 crc kubenswrapper[4768]: I1124 17:02:22.334042 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-pqvp5"] Nov 24 17:02:22 crc kubenswrapper[4768]: E1124 17:02:22.334554 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30c009ab-380d-4bc7-a771-61d41ad10d35" containerName="util" Nov 24 17:02:22 crc kubenswrapper[4768]: I1124 17:02:22.334566 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="30c009ab-380d-4bc7-a771-61d41ad10d35" containerName="util" Nov 24 17:02:22 crc kubenswrapper[4768]: E1124 17:02:22.334582 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30c009ab-380d-4bc7-a771-61d41ad10d35" containerName="extract" Nov 24 17:02:22 crc kubenswrapper[4768]: I1124 17:02:22.334590 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="30c009ab-380d-4bc7-a771-61d41ad10d35" containerName="extract" Nov 24 17:02:22 crc kubenswrapper[4768]: E1124 17:02:22.334602 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30c009ab-380d-4bc7-a771-61d41ad10d35" containerName="pull" Nov 24 17:02:22 crc kubenswrapper[4768]: I1124 17:02:22.334607 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="30c009ab-380d-4bc7-a771-61d41ad10d35" containerName="pull" Nov 24 17:02:22 crc kubenswrapper[4768]: I1124 17:02:22.334712 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="30c009ab-380d-4bc7-a771-61d41ad10d35" containerName="extract" Nov 24 17:02:22 crc kubenswrapper[4768]: I1124 17:02:22.335079 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-pqvp5" Nov 24 17:02:22 crc kubenswrapper[4768]: I1124 17:02:22.338144 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-ss5wn" Nov 24 17:02:22 crc kubenswrapper[4768]: I1124 17:02:22.340528 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 24 17:02:22 crc kubenswrapper[4768]: I1124 17:02:22.340650 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 24 17:02:22 crc kubenswrapper[4768]: I1124 17:02:22.350081 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-pqvp5"] Nov 24 17:02:22 crc kubenswrapper[4768]: I1124 17:02:22.455671 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vn4r\" (UniqueName: \"kubernetes.io/projected/0149c8b7-22b3-4d9d-8bb1-6b8725c3e85b-kube-api-access-2vn4r\") pod \"nmstate-operator-557fdffb88-pqvp5\" (UID: \"0149c8b7-22b3-4d9d-8bb1-6b8725c3e85b\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-pqvp5" Nov 24 17:02:22 crc kubenswrapper[4768]: I1124 17:02:22.556788 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vn4r\" (UniqueName: \"kubernetes.io/projected/0149c8b7-22b3-4d9d-8bb1-6b8725c3e85b-kube-api-access-2vn4r\") pod \"nmstate-operator-557fdffb88-pqvp5\" (UID: \"0149c8b7-22b3-4d9d-8bb1-6b8725c3e85b\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-pqvp5" Nov 24 17:02:22 crc kubenswrapper[4768]: I1124 17:02:22.585857 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vn4r\" (UniqueName: \"kubernetes.io/projected/0149c8b7-22b3-4d9d-8bb1-6b8725c3e85b-kube-api-access-2vn4r\") pod \"nmstate-operator-557fdffb88-pqvp5\" (UID: \"0149c8b7-22b3-4d9d-8bb1-6b8725c3e85b\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-pqvp5" Nov 24 17:02:22 crc kubenswrapper[4768]: I1124 17:02:22.649659 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-pqvp5" Nov 24 17:02:23 crc kubenswrapper[4768]: I1124 17:02:23.036794 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-pqvp5"] Nov 24 17:02:23 crc kubenswrapper[4768]: I1124 17:02:23.835828 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-pqvp5" event={"ID":"0149c8b7-22b3-4d9d-8bb1-6b8725c3e85b","Type":"ContainerStarted","Data":"89f75177fc5d7a93c005e166893255498e641cd40c9b41824b9f1c5e22303940"} Nov 24 17:02:25 crc kubenswrapper[4768]: I1124 17:02:25.850768 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-pqvp5" event={"ID":"0149c8b7-22b3-4d9d-8bb1-6b8725c3e85b","Type":"ContainerStarted","Data":"141af37946df3ca72e0aed4cce2d6bc559eaa6a6dfa5b7a709c5c99e7be21cf1"} Nov 24 17:02:25 crc kubenswrapper[4768]: I1124 17:02:25.871293 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-pqvp5" podStartSLOduration=1.753378526 podStartE2EDuration="3.871271894s" podCreationTimestamp="2025-11-24 17:02:22 +0000 UTC" firstStartedPulling="2025-11-24 17:02:23.051026208 +0000 UTC m=+624.297994866" lastFinishedPulling="2025-11-24 17:02:25.168919576 +0000 UTC m=+626.415888234" observedRunningTime="2025-11-24 17:02:25.866685189 +0000 UTC m=+627.113653857" watchObservedRunningTime="2025-11-24 17:02:25.871271894 +0000 UTC m=+627.118240562" Nov 24 17:02:30 crc kubenswrapper[4768]: I1124 17:02:30.991035 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-5hkm5"] Nov 24 17:02:30 crc kubenswrapper[4768]: I1124 17:02:30.993278 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-5hkm5" Nov 24 17:02:30 crc kubenswrapper[4768]: I1124 17:02:30.996476 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-rcngn" Nov 24 17:02:30 crc kubenswrapper[4768]: I1124 17:02:30.996684 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-rht2n"] Nov 24 17:02:30 crc kubenswrapper[4768]: I1124 17:02:30.997715 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rht2n" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.005857 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.023104 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-rht2n"] Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.026942 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-5hkm5"] Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.044915 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-p2zsh"] Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.046579 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-p2zsh" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.093193 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxpzx\" (UniqueName: \"kubernetes.io/projected/75419742-7b67-4c11-9d45-2db75c1d8342-kube-api-access-lxpzx\") pod \"nmstate-handler-p2zsh\" (UID: \"75419742-7b67-4c11-9d45-2db75c1d8342\") " pod="openshift-nmstate/nmstate-handler-p2zsh" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.093278 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc6zh\" (UniqueName: \"kubernetes.io/projected/5fd414a4-49e9-44b7-8207-e4edb7887dba-kube-api-access-fc6zh\") pod \"nmstate-metrics-5dcf9c57c5-5hkm5\" (UID: \"5fd414a4-49e9-44b7-8207-e4edb7887dba\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-5hkm5" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.093314 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/75419742-7b67-4c11-9d45-2db75c1d8342-ovs-socket\") pod \"nmstate-handler-p2zsh\" (UID: \"75419742-7b67-4c11-9d45-2db75c1d8342\") " pod="openshift-nmstate/nmstate-handler-p2zsh" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.093340 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3755f3c6-8022-4edb-8efe-b858b58cf052-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-rht2n\" (UID: \"3755f3c6-8022-4edb-8efe-b858b58cf052\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rht2n" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.093386 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/75419742-7b67-4c11-9d45-2db75c1d8342-nmstate-lock\") pod \"nmstate-handler-p2zsh\" (UID: \"75419742-7b67-4c11-9d45-2db75c1d8342\") " pod="openshift-nmstate/nmstate-handler-p2zsh" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.093424 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpvjn\" (UniqueName: \"kubernetes.io/projected/3755f3c6-8022-4edb-8efe-b858b58cf052-kube-api-access-gpvjn\") pod \"nmstate-webhook-6b89b748d8-rht2n\" (UID: \"3755f3c6-8022-4edb-8efe-b858b58cf052\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rht2n" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.093453 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/75419742-7b67-4c11-9d45-2db75c1d8342-dbus-socket\") pod \"nmstate-handler-p2zsh\" (UID: \"75419742-7b67-4c11-9d45-2db75c1d8342\") " pod="openshift-nmstate/nmstate-handler-p2zsh" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.134376 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-6zplx"] Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.135050 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-6zplx" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.139735 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.139852 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-87vgn" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.139931 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.150189 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-6zplx"] Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.193914 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/75419742-7b67-4c11-9d45-2db75c1d8342-dbus-socket\") pod \"nmstate-handler-p2zsh\" (UID: \"75419742-7b67-4c11-9d45-2db75c1d8342\") " pod="openshift-nmstate/nmstate-handler-p2zsh" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.193978 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8f14df85-542b-433f-a661-79f1707a03ad-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-6zplx\" (UID: \"8f14df85-542b-433f-a661-79f1707a03ad\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-6zplx" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.194014 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f14df85-542b-433f-a661-79f1707a03ad-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-6zplx\" (UID: \"8f14df85-542b-433f-a661-79f1707a03ad\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-6zplx" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.194054 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxpzx\" (UniqueName: \"kubernetes.io/projected/75419742-7b67-4c11-9d45-2db75c1d8342-kube-api-access-lxpzx\") pod \"nmstate-handler-p2zsh\" (UID: \"75419742-7b67-4c11-9d45-2db75c1d8342\") " pod="openshift-nmstate/nmstate-handler-p2zsh" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.194245 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/75419742-7b67-4c11-9d45-2db75c1d8342-dbus-socket\") pod \"nmstate-handler-p2zsh\" (UID: \"75419742-7b67-4c11-9d45-2db75c1d8342\") " pod="openshift-nmstate/nmstate-handler-p2zsh" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.194265 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc6zh\" (UniqueName: \"kubernetes.io/projected/5fd414a4-49e9-44b7-8207-e4edb7887dba-kube-api-access-fc6zh\") pod \"nmstate-metrics-5dcf9c57c5-5hkm5\" (UID: \"5fd414a4-49e9-44b7-8207-e4edb7887dba\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-5hkm5" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.194377 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/75419742-7b67-4c11-9d45-2db75c1d8342-ovs-socket\") pod \"nmstate-handler-p2zsh\" (UID: \"75419742-7b67-4c11-9d45-2db75c1d8342\") " 
pod="openshift-nmstate/nmstate-handler-p2zsh" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.194424 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxb58\" (UniqueName: \"kubernetes.io/projected/8f14df85-542b-433f-a661-79f1707a03ad-kube-api-access-jxb58\") pod \"nmstate-console-plugin-5874bd7bc5-6zplx\" (UID: \"8f14df85-542b-433f-a661-79f1707a03ad\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-6zplx" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.194455 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3755f3c6-8022-4edb-8efe-b858b58cf052-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-rht2n\" (UID: \"3755f3c6-8022-4edb-8efe-b858b58cf052\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rht2n" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.194490 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/75419742-7b67-4c11-9d45-2db75c1d8342-ovs-socket\") pod \"nmstate-handler-p2zsh\" (UID: \"75419742-7b67-4c11-9d45-2db75c1d8342\") " pod="openshift-nmstate/nmstate-handler-p2zsh" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.194499 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/75419742-7b67-4c11-9d45-2db75c1d8342-nmstate-lock\") pod \"nmstate-handler-p2zsh\" (UID: \"75419742-7b67-4c11-9d45-2db75c1d8342\") " pod="openshift-nmstate/nmstate-handler-p2zsh" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.194545 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/75419742-7b67-4c11-9d45-2db75c1d8342-nmstate-lock\") pod \"nmstate-handler-p2zsh\" (UID: \"75419742-7b67-4c11-9d45-2db75c1d8342\") " pod="openshift-nmstate/nmstate-handler-p2zsh" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.194561 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpvjn\" (UniqueName: \"kubernetes.io/projected/3755f3c6-8022-4edb-8efe-b858b58cf052-kube-api-access-gpvjn\") pod \"nmstate-webhook-6b89b748d8-rht2n\" (UID: \"3755f3c6-8022-4edb-8efe-b858b58cf052\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rht2n" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.200576 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3755f3c6-8022-4edb-8efe-b858b58cf052-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-rht2n\" (UID: \"3755f3c6-8022-4edb-8efe-b858b58cf052\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rht2n" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.208794 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc6zh\" (UniqueName: \"kubernetes.io/projected/5fd414a4-49e9-44b7-8207-e4edb7887dba-kube-api-access-fc6zh\") pod \"nmstate-metrics-5dcf9c57c5-5hkm5\" (UID: \"5fd414a4-49e9-44b7-8207-e4edb7887dba\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-5hkm5" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.212606 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpvjn\" (UniqueName: \"kubernetes.io/projected/3755f3c6-8022-4edb-8efe-b858b58cf052-kube-api-access-gpvjn\") pod \"nmstate-webhook-6b89b748d8-rht2n\" (UID: 
\"3755f3c6-8022-4edb-8efe-b858b58cf052\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rht2n" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.221032 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxpzx\" (UniqueName: \"kubernetes.io/projected/75419742-7b67-4c11-9d45-2db75c1d8342-kube-api-access-lxpzx\") pod \"nmstate-handler-p2zsh\" (UID: \"75419742-7b67-4c11-9d45-2db75c1d8342\") " pod="openshift-nmstate/nmstate-handler-p2zsh" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.295458 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxb58\" (UniqueName: \"kubernetes.io/projected/8f14df85-542b-433f-a661-79f1707a03ad-kube-api-access-jxb58\") pod \"nmstate-console-plugin-5874bd7bc5-6zplx\" (UID: \"8f14df85-542b-433f-a661-79f1707a03ad\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-6zplx" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.295530 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8f14df85-542b-433f-a661-79f1707a03ad-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-6zplx\" (UID: \"8f14df85-542b-433f-a661-79f1707a03ad\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-6zplx" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.295553 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f14df85-542b-433f-a661-79f1707a03ad-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-6zplx\" (UID: \"8f14df85-542b-433f-a661-79f1707a03ad\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-6zplx" Nov 24 17:02:31 crc kubenswrapper[4768]: E1124 17:02:31.295672 4768 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 24 17:02:31 crc kubenswrapper[4768]: E1124 17:02:31.295727 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f14df85-542b-433f-a661-79f1707a03ad-plugin-serving-cert podName:8f14df85-542b-433f-a661-79f1707a03ad nodeName:}" failed. No retries permitted until 2025-11-24 17:02:31.795707129 +0000 UTC m=+633.042675787 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/8f14df85-542b-433f-a661-79f1707a03ad-plugin-serving-cert") pod "nmstate-console-plugin-5874bd7bc5-6zplx" (UID: "8f14df85-542b-433f-a661-79f1707a03ad") : secret "plugin-serving-cert" not found Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.296574 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8f14df85-542b-433f-a661-79f1707a03ad-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-6zplx\" (UID: \"8f14df85-542b-433f-a661-79f1707a03ad\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-6zplx" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.310027 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-58c6fb9d58-m4zs4"] Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.310723 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.314054 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxb58\" (UniqueName: \"kubernetes.io/projected/8f14df85-542b-433f-a661-79f1707a03ad-kube-api-access-jxb58\") pod \"nmstate-console-plugin-5874bd7bc5-6zplx\" (UID: \"8f14df85-542b-433f-a661-79f1707a03ad\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-6zplx" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.320550 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-5hkm5" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.324603 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-58c6fb9d58-m4zs4"] Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.334466 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rht2n" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.366146 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-p2zsh" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.399234 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtf7l\" (UniqueName: \"kubernetes.io/projected/ee722cdf-2378-42f8-aca6-5bb120809e26-kube-api-access-qtf7l\") pod \"console-58c6fb9d58-m4zs4\" (UID: \"ee722cdf-2378-42f8-aca6-5bb120809e26\") " pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.399283 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee722cdf-2378-42f8-aca6-5bb120809e26-console-oauth-config\") pod \"console-58c6fb9d58-m4zs4\" (UID: \"ee722cdf-2378-42f8-aca6-5bb120809e26\") " pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.399313 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee722cdf-2378-42f8-aca6-5bb120809e26-console-config\") pod \"console-58c6fb9d58-m4zs4\" (UID: \"ee722cdf-2378-42f8-aca6-5bb120809e26\") " pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.399343 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee722cdf-2378-42f8-aca6-5bb120809e26-oauth-serving-cert\") pod \"console-58c6fb9d58-m4zs4\" (UID: \"ee722cdf-2378-42f8-aca6-5bb120809e26\") " pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.399414 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee722cdf-2378-42f8-aca6-5bb120809e26-console-serving-cert\") pod \"console-58c6fb9d58-m4zs4\" (UID: \"ee722cdf-2378-42f8-aca6-5bb120809e26\") " pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.399445 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ee722cdf-2378-42f8-aca6-5bb120809e26-trusted-ca-bundle\") pod \"console-58c6fb9d58-m4zs4\" (UID: \"ee722cdf-2378-42f8-aca6-5bb120809e26\") " pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.399524 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee722cdf-2378-42f8-aca6-5bb120809e26-service-ca\") pod \"console-58c6fb9d58-m4zs4\" (UID: \"ee722cdf-2378-42f8-aca6-5bb120809e26\") " pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.502017 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee722cdf-2378-42f8-aca6-5bb120809e26-oauth-serving-cert\") pod \"console-58c6fb9d58-m4zs4\" (UID: \"ee722cdf-2378-42f8-aca6-5bb120809e26\") " pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.502086 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee722cdf-2378-42f8-aca6-5bb120809e26-console-serving-cert\") pod \"console-58c6fb9d58-m4zs4\" (UID: \"ee722cdf-2378-42f8-aca6-5bb120809e26\") " pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.502119 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee722cdf-2378-42f8-aca6-5bb120809e26-trusted-ca-bundle\") pod \"console-58c6fb9d58-m4zs4\" (UID: \"ee722cdf-2378-42f8-aca6-5bb120809e26\") " pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.502160 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee722cdf-2378-42f8-aca6-5bb120809e26-service-ca\") pod \"console-58c6fb9d58-m4zs4\" (UID: \"ee722cdf-2378-42f8-aca6-5bb120809e26\") " pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.502181 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtf7l\" (UniqueName: \"kubernetes.io/projected/ee722cdf-2378-42f8-aca6-5bb120809e26-kube-api-access-qtf7l\") pod \"console-58c6fb9d58-m4zs4\" (UID: \"ee722cdf-2378-42f8-aca6-5bb120809e26\") " pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.502211 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee722cdf-2378-42f8-aca6-5bb120809e26-console-oauth-config\") pod \"console-58c6fb9d58-m4zs4\" (UID: \"ee722cdf-2378-42f8-aca6-5bb120809e26\") " pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.502239 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee722cdf-2378-42f8-aca6-5bb120809e26-console-config\") pod \"console-58c6fb9d58-m4zs4\" (UID: \"ee722cdf-2378-42f8-aca6-5bb120809e26\") " pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.503067 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/ee722cdf-2378-42f8-aca6-5bb120809e26-console-config\") pod \"console-58c6fb9d58-m4zs4\" (UID: \"ee722cdf-2378-42f8-aca6-5bb120809e26\") " pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.503587 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee722cdf-2378-42f8-aca6-5bb120809e26-oauth-serving-cert\") pod \"console-58c6fb9d58-m4zs4\" (UID: \"ee722cdf-2378-42f8-aca6-5bb120809e26\") " pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.505402 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee722cdf-2378-42f8-aca6-5bb120809e26-service-ca\") pod \"console-58c6fb9d58-m4zs4\" (UID: \"ee722cdf-2378-42f8-aca6-5bb120809e26\") " pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.506370 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee722cdf-2378-42f8-aca6-5bb120809e26-trusted-ca-bundle\") pod \"console-58c6fb9d58-m4zs4\" (UID: \"ee722cdf-2378-42f8-aca6-5bb120809e26\") " pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.509395 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee722cdf-2378-42f8-aca6-5bb120809e26-console-oauth-config\") pod \"console-58c6fb9d58-m4zs4\" (UID: \"ee722cdf-2378-42f8-aca6-5bb120809e26\") " pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.509914 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee722cdf-2378-42f8-aca6-5bb120809e26-console-serving-cert\") pod \"console-58c6fb9d58-m4zs4\" (UID: \"ee722cdf-2378-42f8-aca6-5bb120809e26\") " pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.533109 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtf7l\" (UniqueName: \"kubernetes.io/projected/ee722cdf-2378-42f8-aca6-5bb120809e26-kube-api-access-qtf7l\") pod \"console-58c6fb9d58-m4zs4\" (UID: \"ee722cdf-2378-42f8-aca6-5bb120809e26\") " pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.569463 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-5hkm5"] Nov 24 17:02:31 crc kubenswrapper[4768]: W1124 17:02:31.569995 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fd414a4_49e9_44b7_8207_e4edb7887dba.slice/crio-26ca930ad2c21fcd69e33f3b99b3afd4f13974c553d8fbddc99c41d31c304188 WatchSource:0}: Error finding container 26ca930ad2c21fcd69e33f3b99b3afd4f13974c553d8fbddc99c41d31c304188: Status 404 returned error can't find the container with id 26ca930ad2c21fcd69e33f3b99b3afd4f13974c553d8fbddc99c41d31c304188 Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.616746 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-rht2n"] Nov 24 17:02:31 crc kubenswrapper[4768]: W1124 17:02:31.620718 4768 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3755f3c6_8022_4edb_8efe_b858b58cf052.slice/crio-b30eedee9158920408886f92599d66ec7ece8ef1a81a569821bf3355cecad26b WatchSource:0}: Error finding container b30eedee9158920408886f92599d66ec7ece8ef1a81a569821bf3355cecad26b: Status 404 returned error can't find the container with id b30eedee9158920408886f92599d66ec7ece8ef1a81a569821bf3355cecad26b Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.681929 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.805950 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f14df85-542b-433f-a661-79f1707a03ad-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-6zplx\" (UID: \"8f14df85-542b-433f-a661-79f1707a03ad\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-6zplx" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.814239 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f14df85-542b-433f-a661-79f1707a03ad-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-6zplx\" (UID: \"8f14df85-542b-433f-a661-79f1707a03ad\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-6zplx" Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.885876 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-5hkm5" event={"ID":"5fd414a4-49e9-44b7-8207-e4edb7887dba","Type":"ContainerStarted","Data":"26ca930ad2c21fcd69e33f3b99b3afd4f13974c553d8fbddc99c41d31c304188"} Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.892886 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rht2n" event={"ID":"3755f3c6-8022-4edb-8efe-b858b58cf052","Type":"ContainerStarted","Data":"b30eedee9158920408886f92599d66ec7ece8ef1a81a569821bf3355cecad26b"} Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.893006 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-58c6fb9d58-m4zs4"] Nov 24 17:02:31 crc kubenswrapper[4768]: I1124 17:02:31.893817 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-p2zsh" event={"ID":"75419742-7b67-4c11-9d45-2db75c1d8342","Type":"ContainerStarted","Data":"440c796ad93b7179baeccfbd160259f1a6af6958fc5377a406afa9465b1d411c"} Nov 24 17:02:31 crc kubenswrapper[4768]: W1124 17:02:31.901292 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee722cdf_2378_42f8_aca6_5bb120809e26.slice/crio-9fc14a8400e142f4ff19d7abd4d1b454ce1fcbb6aaa0ea635e8854dcde02cff0 WatchSource:0}: Error finding container 9fc14a8400e142f4ff19d7abd4d1b454ce1fcbb6aaa0ea635e8854dcde02cff0: Status 404 returned error can't find the container with id 9fc14a8400e142f4ff19d7abd4d1b454ce1fcbb6aaa0ea635e8854dcde02cff0 Nov 24 17:02:32 crc kubenswrapper[4768]: I1124 17:02:32.056595 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-6zplx" Nov 24 17:02:32 crc kubenswrapper[4768]: I1124 17:02:32.299443 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-6zplx"] Nov 24 17:02:32 crc kubenswrapper[4768]: I1124 17:02:32.901178 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58c6fb9d58-m4zs4" event={"ID":"ee722cdf-2378-42f8-aca6-5bb120809e26","Type":"ContainerStarted","Data":"5a306cde4639d538b57012fc5998775aeeb3c4f974dd8c9427cf61c9787731c4"} Nov 24 17:02:32 crc kubenswrapper[4768]: I1124 17:02:32.901237 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58c6fb9d58-m4zs4" event={"ID":"ee722cdf-2378-42f8-aca6-5bb120809e26","Type":"ContainerStarted","Data":"9fc14a8400e142f4ff19d7abd4d1b454ce1fcbb6aaa0ea635e8854dcde02cff0"} Nov 24 17:02:32 crc kubenswrapper[4768]: I1124 17:02:32.902765 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-6zplx" event={"ID":"8f14df85-542b-433f-a661-79f1707a03ad","Type":"ContainerStarted","Data":"32bb847cdf6ce9642936ce000aa3b9f5b77e20932caf4d072e725953f1ed2ffb"} Nov 24 17:02:32 crc kubenswrapper[4768]: I1124 17:02:32.925712 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-58c6fb9d58-m4zs4" podStartSLOduration=1.925695685 podStartE2EDuration="1.925695685s" podCreationTimestamp="2025-11-24 17:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:02:32.923538676 +0000 UTC m=+634.170507354" watchObservedRunningTime="2025-11-24 17:02:32.925695685 +0000 UTC m=+634.172664353" Nov 24 17:02:34 crc kubenswrapper[4768]: I1124 17:02:34.893002 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:02:34 crc kubenswrapper[4768]: I1124 17:02:34.895917 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:02:34 crc kubenswrapper[4768]: I1124 17:02:34.895993 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf255" Nov 24 17:02:34 crc kubenswrapper[4768]: I1124 17:02:34.897023 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ca92bad52ab5f1c01d70ab976d6cd2ca8cb33df2eb005d50a6ec3e7eded09d6"} pod="openshift-machine-config-operator/machine-config-daemon-jf255" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 17:02:34 crc kubenswrapper[4768]: I1124 17:02:34.897274 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" 
containerID="cri-o://6ca92bad52ab5f1c01d70ab976d6cd2ca8cb33df2eb005d50a6ec3e7eded09d6" gracePeriod=600 Nov 24 17:02:34 crc kubenswrapper[4768]: I1124 17:02:34.918314 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rht2n" event={"ID":"3755f3c6-8022-4edb-8efe-b858b58cf052","Type":"ContainerStarted","Data":"eceeccace7f26d45e5e5f7d45f59ff24d0bc31d47d158b5f5eb9254c3b52af46"} Nov 24 17:02:34 crc kubenswrapper[4768]: I1124 17:02:34.918772 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rht2n" Nov 24 17:02:34 crc kubenswrapper[4768]: I1124 17:02:34.921778 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-p2zsh" event={"ID":"75419742-7b67-4c11-9d45-2db75c1d8342","Type":"ContainerStarted","Data":"22579ce8ec1ff22f88a7c63ca4e11a56d06cb61ce63fc6f733f60739d6b7d879"} Nov 24 17:02:34 crc kubenswrapper[4768]: I1124 17:02:34.921923 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-p2zsh" Nov 24 17:02:34 crc kubenswrapper[4768]: I1124 17:02:34.923425 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-5hkm5" event={"ID":"5fd414a4-49e9-44b7-8207-e4edb7887dba","Type":"ContainerStarted","Data":"1f630b1ac7e1c4af334391aba6e2c9595fba3ea5ee9b6f070e585afe8b449946"} Nov 24 17:02:34 crc kubenswrapper[4768]: I1124 17:02:34.940053 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rht2n" podStartSLOduration=2.700185757 podStartE2EDuration="4.940032382s" podCreationTimestamp="2025-11-24 17:02:30 +0000 UTC" firstStartedPulling="2025-11-24 17:02:31.623340719 +0000 UTC m=+632.870309377" lastFinishedPulling="2025-11-24 17:02:33.863187304 +0000 UTC m=+635.110156002" observedRunningTime="2025-11-24 17:02:34.937784361 +0000 UTC m=+636.184753029" watchObservedRunningTime="2025-11-24 17:02:34.940032382 +0000 UTC m=+636.187001040" Nov 24 17:02:34 crc kubenswrapper[4768]: I1124 17:02:34.960767 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-p2zsh" podStartSLOduration=1.488784816 podStartE2EDuration="3.960722718s" podCreationTimestamp="2025-11-24 17:02:31 +0000 UTC" firstStartedPulling="2025-11-24 17:02:31.402779877 +0000 UTC m=+632.649748535" lastFinishedPulling="2025-11-24 17:02:33.874717739 +0000 UTC m=+635.121686437" observedRunningTime="2025-11-24 17:02:34.960009468 +0000 UTC m=+636.206978136" watchObservedRunningTime="2025-11-24 17:02:34.960722718 +0000 UTC m=+636.207691386" Nov 24 17:02:35 crc kubenswrapper[4768]: I1124 17:02:35.935506 4768 generic.go:334] "Generic (PLEG): container finished" podID="517d8128-bef5-40a3-a786-5010780c2a58" containerID="6ca92bad52ab5f1c01d70ab976d6cd2ca8cb33df2eb005d50a6ec3e7eded09d6" exitCode=0 Nov 24 17:02:35 crc kubenswrapper[4768]: I1124 17:02:35.936929 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" event={"ID":"517d8128-bef5-40a3-a786-5010780c2a58","Type":"ContainerDied","Data":"6ca92bad52ab5f1c01d70ab976d6cd2ca8cb33df2eb005d50a6ec3e7eded09d6"} Nov 24 17:02:35 crc kubenswrapper[4768]: I1124 17:02:35.939140 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" 
event={"ID":"517d8128-bef5-40a3-a786-5010780c2a58","Type":"ContainerStarted","Data":"95ef721fc07cbc17b0f7e83371486f8b9c131887d050be1100a4afc5d9e98d85"} Nov 24 17:02:35 crc kubenswrapper[4768]: I1124 17:02:35.939303 4768 scope.go:117] "RemoveContainer" containerID="7a329c2bcc79a9f3f10df267612fb0d9f6aef0e5add7ff881e55c584ace2a157" Nov 24 17:02:35 crc kubenswrapper[4768]: I1124 17:02:35.945854 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-6zplx" event={"ID":"8f14df85-542b-433f-a661-79f1707a03ad","Type":"ContainerStarted","Data":"8723c75862b5cb0eb240b268689337b350ec47e254e788064e53261e302069e7"} Nov 24 17:02:36 crc kubenswrapper[4768]: I1124 17:02:36.959469 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-5hkm5" event={"ID":"5fd414a4-49e9-44b7-8207-e4edb7887dba","Type":"ContainerStarted","Data":"a2ebecf14e923792b59d5293ba9fc30b0ba7f614f0cf29c5efee8c7c7775f680"} Nov 24 17:02:36 crc kubenswrapper[4768]: I1124 17:02:36.978616 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-6zplx" podStartSLOduration=3.328204712 podStartE2EDuration="5.978600893s" podCreationTimestamp="2025-11-24 17:02:31 +0000 UTC" firstStartedPulling="2025-11-24 17:02:32.313232006 +0000 UTC m=+633.560200664" lastFinishedPulling="2025-11-24 17:02:34.963628187 +0000 UTC m=+636.210596845" observedRunningTime="2025-11-24 17:02:35.977850894 +0000 UTC m=+637.224819542" watchObservedRunningTime="2025-11-24 17:02:36.978600893 +0000 UTC m=+638.225569551" Nov 24 17:02:36 crc kubenswrapper[4768]: I1124 17:02:36.980511 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-5hkm5" podStartSLOduration=2.202925229 podStartE2EDuration="6.980507325s" podCreationTimestamp="2025-11-24 17:02:30 +0000 UTC" firstStartedPulling="2025-11-24 17:02:31.57220337 +0000 UTC m=+632.819172018" lastFinishedPulling="2025-11-24 17:02:36.349785456 +0000 UTC m=+637.596754114" observedRunningTime="2025-11-24 17:02:36.976516906 +0000 UTC m=+638.223485564" watchObservedRunningTime="2025-11-24 17:02:36.980507325 +0000 UTC m=+638.227475983" Nov 24 17:02:41 crc kubenswrapper[4768]: I1124 17:02:41.402113 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-p2zsh" Nov 24 17:02:41 crc kubenswrapper[4768]: I1124 17:02:41.683145 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:41 crc kubenswrapper[4768]: I1124 17:02:41.683380 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:41 crc kubenswrapper[4768]: I1124 17:02:41.689360 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:41 crc kubenswrapper[4768]: I1124 17:02:41.988578 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-58c6fb9d58-m4zs4" Nov 24 17:02:42 crc kubenswrapper[4768]: I1124 17:02:42.034440 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-bkp5p"] Nov 24 17:02:51 crc kubenswrapper[4768]: I1124 17:02:51.343483 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rht2n" Nov 
24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.072988 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-bkp5p" podUID="afbb3133-a1d9-48c9-a496-83babf4eb0c6" containerName="console" containerID="cri-o://218e0939b77034678aac0d04bc8ef289863a41eca407d44ad43365b08990de75" gracePeriod=15 Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.158479 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t"] Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.159938 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.162111 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.177661 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t"] Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.255686 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f04516d3-2027-43f2-975d-294f284a7a36-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t\" (UID: \"f04516d3-2027-43f2-975d-294f284a7a36\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.255784 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps7bk\" (UniqueName: \"kubernetes.io/projected/f04516d3-2027-43f2-975d-294f284a7a36-kube-api-access-ps7bk\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t\" (UID: \"f04516d3-2027-43f2-975d-294f284a7a36\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.255811 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f04516d3-2027-43f2-975d-294f284a7a36-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t\" (UID: \"f04516d3-2027-43f2-975d-294f284a7a36\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.356905 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f04516d3-2027-43f2-975d-294f284a7a36-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t\" (UID: \"f04516d3-2027-43f2-975d-294f284a7a36\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.357029 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps7bk\" (UniqueName: \"kubernetes.io/projected/f04516d3-2027-43f2-975d-294f284a7a36-kube-api-access-ps7bk\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t\" (UID: \"f04516d3-2027-43f2-975d-294f284a7a36\") " 
pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.357071 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f04516d3-2027-43f2-975d-294f284a7a36-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t\" (UID: \"f04516d3-2027-43f2-975d-294f284a7a36\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.357583 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f04516d3-2027-43f2-975d-294f284a7a36-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t\" (UID: \"f04516d3-2027-43f2-975d-294f284a7a36\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.357599 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f04516d3-2027-43f2-975d-294f284a7a36-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t\" (UID: \"f04516d3-2027-43f2-975d-294f284a7a36\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.381310 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps7bk\" (UniqueName: \"kubernetes.io/projected/f04516d3-2027-43f2-975d-294f284a7a36-kube-api-access-ps7bk\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t\" (UID: \"f04516d3-2027-43f2-975d-294f284a7a36\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.474987 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.481521 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-bkp5p_afbb3133-a1d9-48c9-a496-83babf4eb0c6/console/0.log" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.481599 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.660530 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-oauth-serving-cert\") pod \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.660596 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvw4j\" (UniqueName: \"kubernetes.io/projected/afbb3133-a1d9-48c9-a496-83babf4eb0c6-kube-api-access-pvw4j\") pod \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.660654 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-service-ca\") pod \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.660678 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-trusted-ca-bundle\") pod \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.660752 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/afbb3133-a1d9-48c9-a496-83babf4eb0c6-console-serving-cert\") pod \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.660771 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-console-config\") pod \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.660799 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/afbb3133-a1d9-48c9-a496-83babf4eb0c6-console-oauth-config\") pod \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\" (UID: \"afbb3133-a1d9-48c9-a496-83babf4eb0c6\") " Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.661730 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-service-ca" (OuterVolumeSpecName: "service-ca") pod "afbb3133-a1d9-48c9-a496-83babf4eb0c6" (UID: "afbb3133-a1d9-48c9-a496-83babf4eb0c6"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.661869 4768 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.662144 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "afbb3133-a1d9-48c9-a496-83babf4eb0c6" (UID: "afbb3133-a1d9-48c9-a496-83babf4eb0c6"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.662321 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "afbb3133-a1d9-48c9-a496-83babf4eb0c6" (UID: "afbb3133-a1d9-48c9-a496-83babf4eb0c6"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.662336 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-console-config" (OuterVolumeSpecName: "console-config") pod "afbb3133-a1d9-48c9-a496-83babf4eb0c6" (UID: "afbb3133-a1d9-48c9-a496-83babf4eb0c6"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.667079 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afbb3133-a1d9-48c9-a496-83babf4eb0c6-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "afbb3133-a1d9-48c9-a496-83babf4eb0c6" (UID: "afbb3133-a1d9-48c9-a496-83babf4eb0c6"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.667221 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afbb3133-a1d9-48c9-a496-83babf4eb0c6-kube-api-access-pvw4j" (OuterVolumeSpecName: "kube-api-access-pvw4j") pod "afbb3133-a1d9-48c9-a496-83babf4eb0c6" (UID: "afbb3133-a1d9-48c9-a496-83babf4eb0c6"). InnerVolumeSpecName "kube-api-access-pvw4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.667699 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afbb3133-a1d9-48c9-a496-83babf4eb0c6-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "afbb3133-a1d9-48c9-a496-83babf4eb0c6" (UID: "afbb3133-a1d9-48c9-a496-83babf4eb0c6"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.762731 4768 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.762762 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvw4j\" (UniqueName: \"kubernetes.io/projected/afbb3133-a1d9-48c9-a496-83babf4eb0c6-kube-api-access-pvw4j\") on node \"crc\" DevicePath \"\"" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.762778 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.762789 4768 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/afbb3133-a1d9-48c9-a496-83babf4eb0c6-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.762800 4768 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/afbb3133-a1d9-48c9-a496-83babf4eb0c6-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.762810 4768 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/afbb3133-a1d9-48c9-a496-83babf4eb0c6-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:03:07 crc kubenswrapper[4768]: I1124 17:03:07.889750 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t"] Nov 24 17:03:08 crc kubenswrapper[4768]: I1124 17:03:08.193603 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-bkp5p_afbb3133-a1d9-48c9-a496-83babf4eb0c6/console/0.log" Nov 24 17:03:08 crc kubenswrapper[4768]: I1124 17:03:08.193666 4768 generic.go:334] "Generic (PLEG): container finished" podID="afbb3133-a1d9-48c9-a496-83babf4eb0c6" containerID="218e0939b77034678aac0d04bc8ef289863a41eca407d44ad43365b08990de75" exitCode=2 Nov 24 17:03:08 crc kubenswrapper[4768]: I1124 17:03:08.193742 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-bkp5p" Nov 24 17:03:08 crc kubenswrapper[4768]: I1124 17:03:08.193743 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bkp5p" event={"ID":"afbb3133-a1d9-48c9-a496-83babf4eb0c6","Type":"ContainerDied","Data":"218e0939b77034678aac0d04bc8ef289863a41eca407d44ad43365b08990de75"} Nov 24 17:03:08 crc kubenswrapper[4768]: I1124 17:03:08.193871 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bkp5p" event={"ID":"afbb3133-a1d9-48c9-a496-83babf4eb0c6","Type":"ContainerDied","Data":"51a36575bf57360977a5a6c48335e611917f10b4bffd22c011c19e80ca181883"} Nov 24 17:03:08 crc kubenswrapper[4768]: I1124 17:03:08.193892 4768 scope.go:117] "RemoveContainer" containerID="218e0939b77034678aac0d04bc8ef289863a41eca407d44ad43365b08990de75" Nov 24 17:03:08 crc kubenswrapper[4768]: I1124 17:03:08.196706 4768 generic.go:334] "Generic (PLEG): container finished" podID="f04516d3-2027-43f2-975d-294f284a7a36" containerID="4221d24cbb7105cb6d31e7ed7955005d49607253eb4c14f538ef6dddfa544b6e" exitCode=0 Nov 24 17:03:08 crc kubenswrapper[4768]: I1124 17:03:08.196765 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t" event={"ID":"f04516d3-2027-43f2-975d-294f284a7a36","Type":"ContainerDied","Data":"4221d24cbb7105cb6d31e7ed7955005d49607253eb4c14f538ef6dddfa544b6e"} Nov 24 17:03:08 crc kubenswrapper[4768]: I1124 17:03:08.196799 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t" event={"ID":"f04516d3-2027-43f2-975d-294f284a7a36","Type":"ContainerStarted","Data":"072cff5534fc8efb2c4b2642015ff2bcdbe2229084556c9e226eea3d4faf3302"} Nov 24 17:03:08 crc kubenswrapper[4768]: I1124 17:03:08.229388 4768 scope.go:117] "RemoveContainer" containerID="218e0939b77034678aac0d04bc8ef289863a41eca407d44ad43365b08990de75" Nov 24 17:03:08 crc kubenswrapper[4768]: E1124 17:03:08.230816 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"218e0939b77034678aac0d04bc8ef289863a41eca407d44ad43365b08990de75\": container with ID starting with 218e0939b77034678aac0d04bc8ef289863a41eca407d44ad43365b08990de75 not found: ID does not exist" containerID="218e0939b77034678aac0d04bc8ef289863a41eca407d44ad43365b08990de75" Nov 24 17:03:08 crc kubenswrapper[4768]: I1124 17:03:08.230872 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"218e0939b77034678aac0d04bc8ef289863a41eca407d44ad43365b08990de75"} err="failed to get container status \"218e0939b77034678aac0d04bc8ef289863a41eca407d44ad43365b08990de75\": rpc error: code = NotFound desc = could not find container \"218e0939b77034678aac0d04bc8ef289863a41eca407d44ad43365b08990de75\": container with ID starting with 218e0939b77034678aac0d04bc8ef289863a41eca407d44ad43365b08990de75 not found: ID does not exist" Nov 24 17:03:08 crc kubenswrapper[4768]: I1124 17:03:08.235063 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-bkp5p"] Nov 24 17:03:08 crc kubenswrapper[4768]: I1124 17:03:08.239068 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-bkp5p"] Nov 24 17:03:09 crc kubenswrapper[4768]: I1124 17:03:09.600740 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="afbb3133-a1d9-48c9-a496-83babf4eb0c6" path="/var/lib/kubelet/pods/afbb3133-a1d9-48c9-a496-83babf4eb0c6/volumes" Nov 24 17:03:10 crc kubenswrapper[4768]: I1124 17:03:10.222570 4768 generic.go:334] "Generic (PLEG): container finished" podID="f04516d3-2027-43f2-975d-294f284a7a36" containerID="080d3f020d07a56eca735e36b5c70e0a2ca991bdad2eb97a4dca1a8518b60bb1" exitCode=0 Nov 24 17:03:10 crc kubenswrapper[4768]: I1124 17:03:10.222708 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t" event={"ID":"f04516d3-2027-43f2-975d-294f284a7a36","Type":"ContainerDied","Data":"080d3f020d07a56eca735e36b5c70e0a2ca991bdad2eb97a4dca1a8518b60bb1"} Nov 24 17:03:11 crc kubenswrapper[4768]: I1124 17:03:11.231746 4768 generic.go:334] "Generic (PLEG): container finished" podID="f04516d3-2027-43f2-975d-294f284a7a36" containerID="5f2a870ae21636e8145bcbb688f46cf5486e917202823a246d47c49a886c2c93" exitCode=0 Nov 24 17:03:11 crc kubenswrapper[4768]: I1124 17:03:11.231801 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t" event={"ID":"f04516d3-2027-43f2-975d-294f284a7a36","Type":"ContainerDied","Data":"5f2a870ae21636e8145bcbb688f46cf5486e917202823a246d47c49a886c2c93"} Nov 24 17:03:12 crc kubenswrapper[4768]: I1124 17:03:12.581545 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t" Nov 24 17:03:12 crc kubenswrapper[4768]: I1124 17:03:12.742540 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f04516d3-2027-43f2-975d-294f284a7a36-util\") pod \"f04516d3-2027-43f2-975d-294f284a7a36\" (UID: \"f04516d3-2027-43f2-975d-294f284a7a36\") " Nov 24 17:03:12 crc kubenswrapper[4768]: I1124 17:03:12.742657 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps7bk\" (UniqueName: \"kubernetes.io/projected/f04516d3-2027-43f2-975d-294f284a7a36-kube-api-access-ps7bk\") pod \"f04516d3-2027-43f2-975d-294f284a7a36\" (UID: \"f04516d3-2027-43f2-975d-294f284a7a36\") " Nov 24 17:03:12 crc kubenswrapper[4768]: I1124 17:03:12.742700 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f04516d3-2027-43f2-975d-294f284a7a36-bundle\") pod \"f04516d3-2027-43f2-975d-294f284a7a36\" (UID: \"f04516d3-2027-43f2-975d-294f284a7a36\") " Nov 24 17:03:12 crc kubenswrapper[4768]: I1124 17:03:12.744513 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f04516d3-2027-43f2-975d-294f284a7a36-bundle" (OuterVolumeSpecName: "bundle") pod "f04516d3-2027-43f2-975d-294f284a7a36" (UID: "f04516d3-2027-43f2-975d-294f284a7a36"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:03:12 crc kubenswrapper[4768]: I1124 17:03:12.750546 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f04516d3-2027-43f2-975d-294f284a7a36-kube-api-access-ps7bk" (OuterVolumeSpecName: "kube-api-access-ps7bk") pod "f04516d3-2027-43f2-975d-294f284a7a36" (UID: "f04516d3-2027-43f2-975d-294f284a7a36"). InnerVolumeSpecName "kube-api-access-ps7bk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:03:12 crc kubenswrapper[4768]: I1124 17:03:12.762539 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f04516d3-2027-43f2-975d-294f284a7a36-util" (OuterVolumeSpecName: "util") pod "f04516d3-2027-43f2-975d-294f284a7a36" (UID: "f04516d3-2027-43f2-975d-294f284a7a36"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:03:12 crc kubenswrapper[4768]: I1124 17:03:12.844335 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps7bk\" (UniqueName: \"kubernetes.io/projected/f04516d3-2027-43f2-975d-294f284a7a36-kube-api-access-ps7bk\") on node \"crc\" DevicePath \"\"" Nov 24 17:03:12 crc kubenswrapper[4768]: I1124 17:03:12.844415 4768 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f04516d3-2027-43f2-975d-294f284a7a36-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:03:12 crc kubenswrapper[4768]: I1124 17:03:12.844436 4768 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f04516d3-2027-43f2-975d-294f284a7a36-util\") on node \"crc\" DevicePath \"\"" Nov 24 17:03:13 crc kubenswrapper[4768]: I1124 17:03:13.252301 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t" event={"ID":"f04516d3-2027-43f2-975d-294f284a7a36","Type":"ContainerDied","Data":"072cff5534fc8efb2c4b2642015ff2bcdbe2229084556c9e226eea3d4faf3302"} Nov 24 17:03:13 crc kubenswrapper[4768]: I1124 17:03:13.252820 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="072cff5534fc8efb2c4b2642015ff2bcdbe2229084556c9e226eea3d4faf3302" Nov 24 17:03:13 crc kubenswrapper[4768]: I1124 17:03:13.252473 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.234578 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5cc97d846-2sqgw"] Nov 24 17:03:22 crc kubenswrapper[4768]: E1124 17:03:22.235385 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04516d3-2027-43f2-975d-294f284a7a36" containerName="util" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.235398 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04516d3-2027-43f2-975d-294f284a7a36" containerName="util" Nov 24 17:03:22 crc kubenswrapper[4768]: E1124 17:03:22.235418 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04516d3-2027-43f2-975d-294f284a7a36" containerName="pull" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.235424 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04516d3-2027-43f2-975d-294f284a7a36" containerName="pull" Nov 24 17:03:22 crc kubenswrapper[4768]: E1124 17:03:22.235438 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04516d3-2027-43f2-975d-294f284a7a36" containerName="extract" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.235444 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04516d3-2027-43f2-975d-294f284a7a36" containerName="extract" Nov 24 17:03:22 crc kubenswrapper[4768]: E1124 17:03:22.235451 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb3133-a1d9-48c9-a496-83babf4eb0c6" containerName="console" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.235457 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb3133-a1d9-48c9-a496-83babf4eb0c6" containerName="console" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.235574 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb3133-a1d9-48c9-a496-83babf4eb0c6" containerName="console" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.235585 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04516d3-2027-43f2-975d-294f284a7a36" containerName="extract" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.235982 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5cc97d846-2sqgw" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.238382 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.238923 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.238936 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.239225 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-x96dn" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.239397 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.249667 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5cc97d846-2sqgw"] Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.378685 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gwp4\" (UniqueName: \"kubernetes.io/projected/351e35d8-541a-43c5-b07d-affa44d1c013-kube-api-access-6gwp4\") pod \"metallb-operator-controller-manager-5cc97d846-2sqgw\" (UID: \"351e35d8-541a-43c5-b07d-affa44d1c013\") " pod="metallb-system/metallb-operator-controller-manager-5cc97d846-2sqgw" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.378768 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/351e35d8-541a-43c5-b07d-affa44d1c013-webhook-cert\") pod \"metallb-operator-controller-manager-5cc97d846-2sqgw\" (UID: \"351e35d8-541a-43c5-b07d-affa44d1c013\") " pod="metallb-system/metallb-operator-controller-manager-5cc97d846-2sqgw" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.378894 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/351e35d8-541a-43c5-b07d-affa44d1c013-apiservice-cert\") pod \"metallb-operator-controller-manager-5cc97d846-2sqgw\" (UID: \"351e35d8-541a-43c5-b07d-affa44d1c013\") " pod="metallb-system/metallb-operator-controller-manager-5cc97d846-2sqgw" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.479800 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gwp4\" (UniqueName: \"kubernetes.io/projected/351e35d8-541a-43c5-b07d-affa44d1c013-kube-api-access-6gwp4\") pod \"metallb-operator-controller-manager-5cc97d846-2sqgw\" (UID: \"351e35d8-541a-43c5-b07d-affa44d1c013\") " pod="metallb-system/metallb-operator-controller-manager-5cc97d846-2sqgw" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.479892 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/351e35d8-541a-43c5-b07d-affa44d1c013-webhook-cert\") pod \"metallb-operator-controller-manager-5cc97d846-2sqgw\" (UID: \"351e35d8-541a-43c5-b07d-affa44d1c013\") " pod="metallb-system/metallb-operator-controller-manager-5cc97d846-2sqgw" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.479926 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/351e35d8-541a-43c5-b07d-affa44d1c013-apiservice-cert\") pod \"metallb-operator-controller-manager-5cc97d846-2sqgw\" (UID: \"351e35d8-541a-43c5-b07d-affa44d1c013\") " pod="metallb-system/metallb-operator-controller-manager-5cc97d846-2sqgw" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.486235 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/351e35d8-541a-43c5-b07d-affa44d1c013-apiservice-cert\") pod \"metallb-operator-controller-manager-5cc97d846-2sqgw\" (UID: \"351e35d8-541a-43c5-b07d-affa44d1c013\") " pod="metallb-system/metallb-operator-controller-manager-5cc97d846-2sqgw" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.486857 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/351e35d8-541a-43c5-b07d-affa44d1c013-webhook-cert\") pod \"metallb-operator-controller-manager-5cc97d846-2sqgw\" (UID: \"351e35d8-541a-43c5-b07d-affa44d1c013\") " pod="metallb-system/metallb-operator-controller-manager-5cc97d846-2sqgw" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.495995 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gwp4\" (UniqueName: \"kubernetes.io/projected/351e35d8-541a-43c5-b07d-affa44d1c013-kube-api-access-6gwp4\") pod \"metallb-operator-controller-manager-5cc97d846-2sqgw\" (UID: \"351e35d8-541a-43c5-b07d-affa44d1c013\") " pod="metallb-system/metallb-operator-controller-manager-5cc97d846-2sqgw" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.552235 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5cc97d846-2sqgw" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.590982 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f6bc667bb-56fwx"] Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.591739 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5f6bc667bb-56fwx" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.597667 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.597680 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.600642 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-qsbj7" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.682458 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca825a3d-d8e1-45ce-af38-6874f0b3c498-apiservice-cert\") pod \"metallb-operator-webhook-server-5f6bc667bb-56fwx\" (UID: \"ca825a3d-d8e1-45ce-af38-6874f0b3c498\") " pod="metallb-system/metallb-operator-webhook-server-5f6bc667bb-56fwx" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.682505 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca825a3d-d8e1-45ce-af38-6874f0b3c498-webhook-cert\") pod \"metallb-operator-webhook-server-5f6bc667bb-56fwx\" (UID: \"ca825a3d-d8e1-45ce-af38-6874f0b3c498\") " pod="metallb-system/metallb-operator-webhook-server-5f6bc667bb-56fwx" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.682528 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5zp6\" (UniqueName: \"kubernetes.io/projected/ca825a3d-d8e1-45ce-af38-6874f0b3c498-kube-api-access-w5zp6\") pod \"metallb-operator-webhook-server-5f6bc667bb-56fwx\" (UID: \"ca825a3d-d8e1-45ce-af38-6874f0b3c498\") " pod="metallb-system/metallb-operator-webhook-server-5f6bc667bb-56fwx" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.687198 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f6bc667bb-56fwx"] Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.783592 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca825a3d-d8e1-45ce-af38-6874f0b3c498-apiservice-cert\") pod \"metallb-operator-webhook-server-5f6bc667bb-56fwx\" (UID: \"ca825a3d-d8e1-45ce-af38-6874f0b3c498\") " pod="metallb-system/metallb-operator-webhook-server-5f6bc667bb-56fwx" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.783634 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca825a3d-d8e1-45ce-af38-6874f0b3c498-webhook-cert\") pod \"metallb-operator-webhook-server-5f6bc667bb-56fwx\" (UID: \"ca825a3d-d8e1-45ce-af38-6874f0b3c498\") " pod="metallb-system/metallb-operator-webhook-server-5f6bc667bb-56fwx" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.783659 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5zp6\" (UniqueName: \"kubernetes.io/projected/ca825a3d-d8e1-45ce-af38-6874f0b3c498-kube-api-access-w5zp6\") pod \"metallb-operator-webhook-server-5f6bc667bb-56fwx\" (UID: \"ca825a3d-d8e1-45ce-af38-6874f0b3c498\") " pod="metallb-system/metallb-operator-webhook-server-5f6bc667bb-56fwx" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 
17:03:22.796970 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca825a3d-d8e1-45ce-af38-6874f0b3c498-webhook-cert\") pod \"metallb-operator-webhook-server-5f6bc667bb-56fwx\" (UID: \"ca825a3d-d8e1-45ce-af38-6874f0b3c498\") " pod="metallb-system/metallb-operator-webhook-server-5f6bc667bb-56fwx" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.798454 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca825a3d-d8e1-45ce-af38-6874f0b3c498-apiservice-cert\") pod \"metallb-operator-webhook-server-5f6bc667bb-56fwx\" (UID: \"ca825a3d-d8e1-45ce-af38-6874f0b3c498\") " pod="metallb-system/metallb-operator-webhook-server-5f6bc667bb-56fwx" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.807022 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5zp6\" (UniqueName: \"kubernetes.io/projected/ca825a3d-d8e1-45ce-af38-6874f0b3c498-kube-api-access-w5zp6\") pod \"metallb-operator-webhook-server-5f6bc667bb-56fwx\" (UID: \"ca825a3d-d8e1-45ce-af38-6874f0b3c498\") " pod="metallb-system/metallb-operator-webhook-server-5f6bc667bb-56fwx" Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.931479 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5cc97d846-2sqgw"] Nov 24 17:03:22 crc kubenswrapper[4768]: I1124 17:03:22.940403 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5f6bc667bb-56fwx" Nov 24 17:03:23 crc kubenswrapper[4768]: I1124 17:03:23.158564 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f6bc667bb-56fwx"] Nov 24 17:03:23 crc kubenswrapper[4768]: W1124 17:03:23.159189 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca825a3d_d8e1_45ce_af38_6874f0b3c498.slice/crio-30d52518470a9b68fffd468b036a5ff8ffbfa1fc95e417efec15ccbb9b3919e0 WatchSource:0}: Error finding container 30d52518470a9b68fffd468b036a5ff8ffbfa1fc95e417efec15ccbb9b3919e0: Status 404 returned error can't find the container with id 30d52518470a9b68fffd468b036a5ff8ffbfa1fc95e417efec15ccbb9b3919e0 Nov 24 17:03:23 crc kubenswrapper[4768]: I1124 17:03:23.320849 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5f6bc667bb-56fwx" event={"ID":"ca825a3d-d8e1-45ce-af38-6874f0b3c498","Type":"ContainerStarted","Data":"30d52518470a9b68fffd468b036a5ff8ffbfa1fc95e417efec15ccbb9b3919e0"} Nov 24 17:03:23 crc kubenswrapper[4768]: I1124 17:03:23.321705 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5cc97d846-2sqgw" event={"ID":"351e35d8-541a-43c5-b07d-affa44d1c013","Type":"ContainerStarted","Data":"31e57c50008605480c38d37c835c4d6f0148198a58bd7a890003be76e7a4ae68"} Nov 24 17:03:27 crc kubenswrapper[4768]: I1124 17:03:27.359834 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5cc97d846-2sqgw" event={"ID":"351e35d8-541a-43c5-b07d-affa44d1c013","Type":"ContainerStarted","Data":"45eb94d14ae73ddf94cea5d94460e380990d2c4cd5fca90a3108d591173d5ed6"} Nov 24 17:03:27 crc kubenswrapper[4768]: I1124 17:03:27.360808 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-5cc97d846-2sqgw" Nov 24 17:03:27 crc kubenswrapper[4768]: I1124 17:03:27.381932 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5cc97d846-2sqgw" podStartSLOduration=1.966858213 podStartE2EDuration="5.381894017s" podCreationTimestamp="2025-11-24 17:03:22 +0000 UTC" firstStartedPulling="2025-11-24 17:03:22.948004399 +0000 UTC m=+684.194973057" lastFinishedPulling="2025-11-24 17:03:26.363040203 +0000 UTC m=+687.610008861" observedRunningTime="2025-11-24 17:03:27.377399074 +0000 UTC m=+688.624367732" watchObservedRunningTime="2025-11-24 17:03:27.381894017 +0000 UTC m=+688.628862675" Nov 24 17:03:28 crc kubenswrapper[4768]: I1124 17:03:28.369594 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5f6bc667bb-56fwx" event={"ID":"ca825a3d-d8e1-45ce-af38-6874f0b3c498","Type":"ContainerStarted","Data":"bf84224ca4141ddad530afa9a8baf58b2ef20aa3a92f8fd2d44f6dfe4a5592aa"} Nov 24 17:03:28 crc kubenswrapper[4768]: I1124 17:03:28.370338 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5f6bc667bb-56fwx" Nov 24 17:03:28 crc kubenswrapper[4768]: I1124 17:03:28.392150 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5f6bc667bb-56fwx" podStartSLOduration=1.453937005 podStartE2EDuration="6.392127264s" podCreationTimestamp="2025-11-24 17:03:22 +0000 UTC" firstStartedPulling="2025-11-24 17:03:23.162808676 +0000 UTC m=+684.409777344" lastFinishedPulling="2025-11-24 17:03:28.100998945 +0000 UTC m=+689.347967603" observedRunningTime="2025-11-24 17:03:28.38945726 +0000 UTC m=+689.636425938" watchObservedRunningTime="2025-11-24 17:03:28.392127264 +0000 UTC m=+689.639095922" Nov 24 17:03:42 crc kubenswrapper[4768]: I1124 17:03:42.947473 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5f6bc667bb-56fwx" Nov 24 17:04:02 crc kubenswrapper[4768]: I1124 17:04:02.554453 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5cc97d846-2sqgw" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.338732 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-xt7mv"] Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.341540 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.343826 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.344337 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.344599 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-hpr5k" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.358109 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-2szdn"] Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.359100 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-2szdn" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.364677 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.383561 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-2szdn"] Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.392457 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98a7049b-d1ef-41d1-aa13-62bc2f1657ea-cert\") pod \"frr-k8s-webhook-server-6998585d5-2szdn\" (UID: \"98a7049b-d1ef-41d1-aa13-62bc2f1657ea\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-2szdn" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.392515 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7a21efe0-4145-43ac-9e98-31fecbc074d5-reloader\") pod \"frr-k8s-xt7mv\" (UID: \"7a21efe0-4145-43ac-9e98-31fecbc074d5\") " pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.392548 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76dzv\" (UniqueName: \"kubernetes.io/projected/98a7049b-d1ef-41d1-aa13-62bc2f1657ea-kube-api-access-76dzv\") pod \"frr-k8s-webhook-server-6998585d5-2szdn\" (UID: \"98a7049b-d1ef-41d1-aa13-62bc2f1657ea\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-2szdn" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.392577 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7a21efe0-4145-43ac-9e98-31fecbc074d5-frr-sockets\") pod \"frr-k8s-xt7mv\" (UID: \"7a21efe0-4145-43ac-9e98-31fecbc074d5\") " pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.392661 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7a21efe0-4145-43ac-9e98-31fecbc074d5-frr-conf\") pod \"frr-k8s-xt7mv\" (UID: \"7a21efe0-4145-43ac-9e98-31fecbc074d5\") " pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.392736 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a21efe0-4145-43ac-9e98-31fecbc074d5-metrics-certs\") pod \"frr-k8s-xt7mv\" (UID: \"7a21efe0-4145-43ac-9e98-31fecbc074d5\") " pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.392888 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7a21efe0-4145-43ac-9e98-31fecbc074d5-frr-startup\") pod \"frr-k8s-xt7mv\" (UID: \"7a21efe0-4145-43ac-9e98-31fecbc074d5\") " pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.393006 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnrd2\" (UniqueName: \"kubernetes.io/projected/7a21efe0-4145-43ac-9e98-31fecbc074d5-kube-api-access-qnrd2\") pod \"frr-k8s-xt7mv\" (UID: \"7a21efe0-4145-43ac-9e98-31fecbc074d5\") " 
pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.393058 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7a21efe0-4145-43ac-9e98-31fecbc074d5-metrics\") pod \"frr-k8s-xt7mv\" (UID: \"7a21efe0-4145-43ac-9e98-31fecbc074d5\") " pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.471844 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-9m6sf"] Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.473105 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-9m6sf" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.475994 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.476090 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.476146 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.476628 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-7zsrb" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.490649 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-6rm47"] Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.491652 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-6rm47" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.494139 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.494933 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a21efe0-4145-43ac-9e98-31fecbc074d5-metrics-certs\") pod \"frr-k8s-xt7mv\" (UID: \"7a21efe0-4145-43ac-9e98-31fecbc074d5\") " pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.495007 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6bd76705-44df-4419-a1d4-e294b3d010fd-metallb-excludel2\") pod \"speaker-9m6sf\" (UID: \"6bd76705-44df-4419-a1d4-e294b3d010fd\") " pod="metallb-system/speaker-9m6sf" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.495048 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzdt5\" (UniqueName: \"kubernetes.io/projected/6bd76705-44df-4419-a1d4-e294b3d010fd-kube-api-access-bzdt5\") pod \"speaker-9m6sf\" (UID: \"6bd76705-44df-4419-a1d4-e294b3d010fd\") " pod="metallb-system/speaker-9m6sf" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.495112 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7a21efe0-4145-43ac-9e98-31fecbc074d5-frr-startup\") pod \"frr-k8s-xt7mv\" (UID: \"7a21efe0-4145-43ac-9e98-31fecbc074d5\") " pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.495166 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bd76705-44df-4419-a1d4-e294b3d010fd-metrics-certs\") pod \"speaker-9m6sf\" (UID: \"6bd76705-44df-4419-a1d4-e294b3d010fd\") " pod="metallb-system/speaker-9m6sf" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.495199 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnrd2\" (UniqueName: \"kubernetes.io/projected/7a21efe0-4145-43ac-9e98-31fecbc074d5-kube-api-access-qnrd2\") pod \"frr-k8s-xt7mv\" (UID: \"7a21efe0-4145-43ac-9e98-31fecbc074d5\") " pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.495224 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7a21efe0-4145-43ac-9e98-31fecbc074d5-metrics\") pod \"frr-k8s-xt7mv\" (UID: \"7a21efe0-4145-43ac-9e98-31fecbc074d5\") " pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.495251 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6bd76705-44df-4419-a1d4-e294b3d010fd-memberlist\") pod \"speaker-9m6sf\" (UID: \"6bd76705-44df-4419-a1d4-e294b3d010fd\") " pod="metallb-system/speaker-9m6sf" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.495276 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98a7049b-d1ef-41d1-aa13-62bc2f1657ea-cert\") pod \"frr-k8s-webhook-server-6998585d5-2szdn\" (UID: \"98a7049b-d1ef-41d1-aa13-62bc2f1657ea\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-2szdn" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.495298 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7a21efe0-4145-43ac-9e98-31fecbc074d5-reloader\") pod \"frr-k8s-xt7mv\" (UID: \"7a21efe0-4145-43ac-9e98-31fecbc074d5\") " pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: E1124 17:04:03.495545 4768 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Nov 24 17:04:03 crc kubenswrapper[4768]: E1124 17:04:03.495628 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98a7049b-d1ef-41d1-aa13-62bc2f1657ea-cert podName:98a7049b-d1ef-41d1-aa13-62bc2f1657ea nodeName:}" failed. No retries permitted until 2025-11-24 17:04:03.995603287 +0000 UTC m=+725.242571945 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/98a7049b-d1ef-41d1-aa13-62bc2f1657ea-cert") pod "frr-k8s-webhook-server-6998585d5-2szdn" (UID: "98a7049b-d1ef-41d1-aa13-62bc2f1657ea") : secret "frr-k8s-webhook-server-cert" not found Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.495867 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7a21efe0-4145-43ac-9e98-31fecbc074d5-reloader\") pod \"frr-k8s-xt7mv\" (UID: \"7a21efe0-4145-43ac-9e98-31fecbc074d5\") " pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.495901 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76dzv\" (UniqueName: \"kubernetes.io/projected/98a7049b-d1ef-41d1-aa13-62bc2f1657ea-kube-api-access-76dzv\") pod \"frr-k8s-webhook-server-6998585d5-2szdn\" (UID: \"98a7049b-d1ef-41d1-aa13-62bc2f1657ea\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-2szdn" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.495947 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7a21efe0-4145-43ac-9e98-31fecbc074d5-frr-sockets\") pod \"frr-k8s-xt7mv\" (UID: \"7a21efe0-4145-43ac-9e98-31fecbc074d5\") " pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.495988 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7a21efe0-4145-43ac-9e98-31fecbc074d5-metrics\") pod \"frr-k8s-xt7mv\" (UID: \"7a21efe0-4145-43ac-9e98-31fecbc074d5\") " pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.496009 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7a21efe0-4145-43ac-9e98-31fecbc074d5-frr-conf\") pod \"frr-k8s-xt7mv\" (UID: \"7a21efe0-4145-43ac-9e98-31fecbc074d5\") " pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.496208 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7a21efe0-4145-43ac-9e98-31fecbc074d5-frr-sockets\") pod \"frr-k8s-xt7mv\" (UID: \"7a21efe0-4145-43ac-9e98-31fecbc074d5\") " pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.497984 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7a21efe0-4145-43ac-9e98-31fecbc074d5-frr-startup\") pod \"frr-k8s-xt7mv\" (UID: \"7a21efe0-4145-43ac-9e98-31fecbc074d5\") " pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.499843 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7a21efe0-4145-43ac-9e98-31fecbc074d5-frr-conf\") pod \"frr-k8s-xt7mv\" (UID: \"7a21efe0-4145-43ac-9e98-31fecbc074d5\") " pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.507214 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a21efe0-4145-43ac-9e98-31fecbc074d5-metrics-certs\") pod \"frr-k8s-xt7mv\" (UID: \"7a21efe0-4145-43ac-9e98-31fecbc074d5\") " pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.510984 
4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-6rm47"] Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.522007 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnrd2\" (UniqueName: \"kubernetes.io/projected/7a21efe0-4145-43ac-9e98-31fecbc074d5-kube-api-access-qnrd2\") pod \"frr-k8s-xt7mv\" (UID: \"7a21efe0-4145-43ac-9e98-31fecbc074d5\") " pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.537221 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76dzv\" (UniqueName: \"kubernetes.io/projected/98a7049b-d1ef-41d1-aa13-62bc2f1657ea-kube-api-access-76dzv\") pod \"frr-k8s-webhook-server-6998585d5-2szdn\" (UID: \"98a7049b-d1ef-41d1-aa13-62bc2f1657ea\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-2szdn" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.597330 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bd76705-44df-4419-a1d4-e294b3d010fd-metrics-certs\") pod \"speaker-9m6sf\" (UID: \"6bd76705-44df-4419-a1d4-e294b3d010fd\") " pod="metallb-system/speaker-9m6sf" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.597447 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6bd76705-44df-4419-a1d4-e294b3d010fd-memberlist\") pod \"speaker-9m6sf\" (UID: \"6bd76705-44df-4419-a1d4-e294b3d010fd\") " pod="metallb-system/speaker-9m6sf" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.597504 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ec0e305-1a0c-449b-8c6c-9f5930582193-cert\") pod \"controller-6c7b4b5f48-6rm47\" (UID: \"7ec0e305-1a0c-449b-8c6c-9f5930582193\") " pod="metallb-system/controller-6c7b4b5f48-6rm47" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.597552 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljp2g\" (UniqueName: \"kubernetes.io/projected/7ec0e305-1a0c-449b-8c6c-9f5930582193-kube-api-access-ljp2g\") pod \"controller-6c7b4b5f48-6rm47\" (UID: \"7ec0e305-1a0c-449b-8c6c-9f5930582193\") " pod="metallb-system/controller-6c7b4b5f48-6rm47" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.597622 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6bd76705-44df-4419-a1d4-e294b3d010fd-metallb-excludel2\") pod \"speaker-9m6sf\" (UID: \"6bd76705-44df-4419-a1d4-e294b3d010fd\") " pod="metallb-system/speaker-9m6sf" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.597660 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzdt5\" (UniqueName: \"kubernetes.io/projected/6bd76705-44df-4419-a1d4-e294b3d010fd-kube-api-access-bzdt5\") pod \"speaker-9m6sf\" (UID: \"6bd76705-44df-4419-a1d4-e294b3d010fd\") " pod="metallb-system/speaker-9m6sf" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.597677 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ec0e305-1a0c-449b-8c6c-9f5930582193-metrics-certs\") pod \"controller-6c7b4b5f48-6rm47\" (UID: \"7ec0e305-1a0c-449b-8c6c-9f5930582193\") " 
pod="metallb-system/controller-6c7b4b5f48-6rm47" Nov 24 17:04:03 crc kubenswrapper[4768]: E1124 17:04:03.599667 4768 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Nov 24 17:04:03 crc kubenswrapper[4768]: E1124 17:04:03.599732 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bd76705-44df-4419-a1d4-e294b3d010fd-metrics-certs podName:6bd76705-44df-4419-a1d4-e294b3d010fd nodeName:}" failed. No retries permitted until 2025-11-24 17:04:04.099713217 +0000 UTC m=+725.346681875 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6bd76705-44df-4419-a1d4-e294b3d010fd-metrics-certs") pod "speaker-9m6sf" (UID: "6bd76705-44df-4419-a1d4-e294b3d010fd") : secret "speaker-certs-secret" not found Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.600023 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6bd76705-44df-4419-a1d4-e294b3d010fd-metallb-excludel2\") pod \"speaker-9m6sf\" (UID: \"6bd76705-44df-4419-a1d4-e294b3d010fd\") " pod="metallb-system/speaker-9m6sf" Nov 24 17:04:03 crc kubenswrapper[4768]: E1124 17:04:03.600605 4768 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 24 17:04:03 crc kubenswrapper[4768]: E1124 17:04:03.600652 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bd76705-44df-4419-a1d4-e294b3d010fd-memberlist podName:6bd76705-44df-4419-a1d4-e294b3d010fd nodeName:}" failed. No retries permitted until 2025-11-24 17:04:04.100640633 +0000 UTC m=+725.347609291 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6bd76705-44df-4419-a1d4-e294b3d010fd-memberlist") pod "speaker-9m6sf" (UID: "6bd76705-44df-4419-a1d4-e294b3d010fd") : secret "metallb-memberlist" not found Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.619905 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzdt5\" (UniqueName: \"kubernetes.io/projected/6bd76705-44df-4419-a1d4-e294b3d010fd-kube-api-access-bzdt5\") pod \"speaker-9m6sf\" (UID: \"6bd76705-44df-4419-a1d4-e294b3d010fd\") " pod="metallb-system/speaker-9m6sf" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.659944 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.699591 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ec0e305-1a0c-449b-8c6c-9f5930582193-cert\") pod \"controller-6c7b4b5f48-6rm47\" (UID: \"7ec0e305-1a0c-449b-8c6c-9f5930582193\") " pod="metallb-system/controller-6c7b4b5f48-6rm47" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.699931 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljp2g\" (UniqueName: \"kubernetes.io/projected/7ec0e305-1a0c-449b-8c6c-9f5930582193-kube-api-access-ljp2g\") pod \"controller-6c7b4b5f48-6rm47\" (UID: \"7ec0e305-1a0c-449b-8c6c-9f5930582193\") " pod="metallb-system/controller-6c7b4b5f48-6rm47" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.699991 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ec0e305-1a0c-449b-8c6c-9f5930582193-metrics-certs\") pod \"controller-6c7b4b5f48-6rm47\" (UID: \"7ec0e305-1a0c-449b-8c6c-9f5930582193\") " pod="metallb-system/controller-6c7b4b5f48-6rm47" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.701276 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.705396 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7ec0e305-1a0c-449b-8c6c-9f5930582193-metrics-certs\") pod \"controller-6c7b4b5f48-6rm47\" (UID: \"7ec0e305-1a0c-449b-8c6c-9f5930582193\") " pod="metallb-system/controller-6c7b4b5f48-6rm47" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.713541 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7ec0e305-1a0c-449b-8c6c-9f5930582193-cert\") pod \"controller-6c7b4b5f48-6rm47\" (UID: \"7ec0e305-1a0c-449b-8c6c-9f5930582193\") " pod="metallb-system/controller-6c7b4b5f48-6rm47" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.718686 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljp2g\" (UniqueName: \"kubernetes.io/projected/7ec0e305-1a0c-449b-8c6c-9f5930582193-kube-api-access-ljp2g\") pod \"controller-6c7b4b5f48-6rm47\" (UID: \"7ec0e305-1a0c-449b-8c6c-9f5930582193\") " pod="metallb-system/controller-6c7b4b5f48-6rm47" Nov 24 17:04:03 crc kubenswrapper[4768]: I1124 17:04:03.858412 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-6rm47" Nov 24 17:04:04 crc kubenswrapper[4768]: I1124 17:04:04.004560 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98a7049b-d1ef-41d1-aa13-62bc2f1657ea-cert\") pod \"frr-k8s-webhook-server-6998585d5-2szdn\" (UID: \"98a7049b-d1ef-41d1-aa13-62bc2f1657ea\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-2szdn" Nov 24 17:04:04 crc kubenswrapper[4768]: I1124 17:04:04.008573 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98a7049b-d1ef-41d1-aa13-62bc2f1657ea-cert\") pod \"frr-k8s-webhook-server-6998585d5-2szdn\" (UID: \"98a7049b-d1ef-41d1-aa13-62bc2f1657ea\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-2szdn" Nov 24 17:04:04 crc kubenswrapper[4768]: I1124 17:04:04.101156 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-6rm47"] Nov 24 17:04:04 crc kubenswrapper[4768]: I1124 17:04:04.108124 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bd76705-44df-4419-a1d4-e294b3d010fd-metrics-certs\") pod \"speaker-9m6sf\" (UID: \"6bd76705-44df-4419-a1d4-e294b3d010fd\") " pod="metallb-system/speaker-9m6sf" Nov 24 17:04:04 crc kubenswrapper[4768]: I1124 17:04:04.108207 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6bd76705-44df-4419-a1d4-e294b3d010fd-memberlist\") pod \"speaker-9m6sf\" (UID: \"6bd76705-44df-4419-a1d4-e294b3d010fd\") " pod="metallb-system/speaker-9m6sf" Nov 24 17:04:04 crc kubenswrapper[4768]: E1124 17:04:04.108452 4768 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 24 17:04:04 crc kubenswrapper[4768]: E1124 17:04:04.108537 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bd76705-44df-4419-a1d4-e294b3d010fd-memberlist podName:6bd76705-44df-4419-a1d4-e294b3d010fd nodeName:}" failed. No retries permitted until 2025-11-24 17:04:05.108514669 +0000 UTC m=+726.355483317 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6bd76705-44df-4419-a1d4-e294b3d010fd-memberlist") pod "speaker-9m6sf" (UID: "6bd76705-44df-4419-a1d4-e294b3d010fd") : secret "metallb-memberlist" not found Nov 24 17:04:04 crc kubenswrapper[4768]: I1124 17:04:04.111057 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bd76705-44df-4419-a1d4-e294b3d010fd-metrics-certs\") pod \"speaker-9m6sf\" (UID: \"6bd76705-44df-4419-a1d4-e294b3d010fd\") " pod="metallb-system/speaker-9m6sf" Nov 24 17:04:04 crc kubenswrapper[4768]: W1124 17:04:04.116589 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ec0e305_1a0c_449b_8c6c_9f5930582193.slice/crio-2df34a3c4c26b20e80ac29a390f3caae39ee60a07860e563e0185e79f44269ff WatchSource:0}: Error finding container 2df34a3c4c26b20e80ac29a390f3caae39ee60a07860e563e0185e79f44269ff: Status 404 returned error can't find the container with id 2df34a3c4c26b20e80ac29a390f3caae39ee60a07860e563e0185e79f44269ff Nov 24 17:04:04 crc kubenswrapper[4768]: I1124 17:04:04.279220 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-2szdn" Nov 24 17:04:04 crc kubenswrapper[4768]: I1124 17:04:04.492921 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-2szdn"] Nov 24 17:04:04 crc kubenswrapper[4768]: W1124 17:04:04.498270 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98a7049b_d1ef_41d1_aa13_62bc2f1657ea.slice/crio-214384f46ff21487264aee813cd3698c6490daa422243a89478070ca16d3d486 WatchSource:0}: Error finding container 214384f46ff21487264aee813cd3698c6490daa422243a89478070ca16d3d486: Status 404 returned error can't find the container with id 214384f46ff21487264aee813cd3698c6490daa422243a89478070ca16d3d486 Nov 24 17:04:04 crc kubenswrapper[4768]: I1124 17:04:04.628521 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-6rm47" event={"ID":"7ec0e305-1a0c-449b-8c6c-9f5930582193","Type":"ContainerStarted","Data":"43b7c2f2d33fe6c76db437bf47bbadc88619b6d60ec73149f2fe11cf2f537c8d"} Nov 24 17:04:04 crc kubenswrapper[4768]: I1124 17:04:04.628568 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-6rm47" event={"ID":"7ec0e305-1a0c-449b-8c6c-9f5930582193","Type":"ContainerStarted","Data":"2df34a3c4c26b20e80ac29a390f3caae39ee60a07860e563e0185e79f44269ff"} Nov 24 17:04:04 crc kubenswrapper[4768]: I1124 17:04:04.629200 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xt7mv" event={"ID":"7a21efe0-4145-43ac-9e98-31fecbc074d5","Type":"ContainerStarted","Data":"5981c12a11c53fc6861b3009a02da38678c07e8f271c5bb7559fec68f8e632e8"} Nov 24 17:04:04 crc kubenswrapper[4768]: I1124 17:04:04.629909 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-2szdn" event={"ID":"98a7049b-d1ef-41d1-aa13-62bc2f1657ea","Type":"ContainerStarted","Data":"214384f46ff21487264aee813cd3698c6490daa422243a89478070ca16d3d486"} Nov 24 17:04:05 crc kubenswrapper[4768]: I1124 17:04:05.123664 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6bd76705-44df-4419-a1d4-e294b3d010fd-memberlist\") pod \"speaker-9m6sf\" (UID: \"6bd76705-44df-4419-a1d4-e294b3d010fd\") " pod="metallb-system/speaker-9m6sf" Nov 24 17:04:05 crc kubenswrapper[4768]: I1124 17:04:05.137299 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6bd76705-44df-4419-a1d4-e294b3d010fd-memberlist\") pod \"speaker-9m6sf\" (UID: \"6bd76705-44df-4419-a1d4-e294b3d010fd\") " pod="metallb-system/speaker-9m6sf" Nov 24 17:04:05 crc kubenswrapper[4768]: I1124 17:04:05.286723 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-9m6sf" Nov 24 17:04:05 crc kubenswrapper[4768]: W1124 17:04:05.308522 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bd76705_44df_4419_a1d4_e294b3d010fd.slice/crio-cf30da4f0d02bc56031521ed65a19ac8c2221ef1c50dbecb8c0171eb29ee1c51 WatchSource:0}: Error finding container cf30da4f0d02bc56031521ed65a19ac8c2221ef1c50dbecb8c0171eb29ee1c51: Status 404 returned error can't find the container with id cf30da4f0d02bc56031521ed65a19ac8c2221ef1c50dbecb8c0171eb29ee1c51 Nov 24 17:04:05 crc kubenswrapper[4768]: I1124 17:04:05.649033 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-6rm47" event={"ID":"7ec0e305-1a0c-449b-8c6c-9f5930582193","Type":"ContainerStarted","Data":"bdc17739407add582118a030e562bf947558c06811c9c5fb3c8b18e941ff0834"} Nov 24 17:04:05 crc kubenswrapper[4768]: I1124 17:04:05.649154 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-6rm47" Nov 24 17:04:05 crc kubenswrapper[4768]: I1124 17:04:05.651388 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9m6sf" event={"ID":"6bd76705-44df-4419-a1d4-e294b3d010fd","Type":"ContainerStarted","Data":"8f082cab4986640687f3572d7e090f686ae954d8e48505fddac2a5b1edbc78c4"} Nov 24 17:04:05 crc kubenswrapper[4768]: I1124 17:04:05.651416 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9m6sf" event={"ID":"6bd76705-44df-4419-a1d4-e294b3d010fd","Type":"ContainerStarted","Data":"cf30da4f0d02bc56031521ed65a19ac8c2221ef1c50dbecb8c0171eb29ee1c51"} Nov 24 17:04:05 crc kubenswrapper[4768]: I1124 17:04:05.666956 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-6rm47" podStartSLOduration=2.6669235589999998 podStartE2EDuration="2.666923559s" podCreationTimestamp="2025-11-24 17:04:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:04:05.665189311 +0000 UTC m=+726.912157969" watchObservedRunningTime="2025-11-24 17:04:05.666923559 +0000 UTC m=+726.913892207" Nov 24 17:04:06 crc kubenswrapper[4768]: I1124 17:04:06.661957 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9m6sf" event={"ID":"6bd76705-44df-4419-a1d4-e294b3d010fd","Type":"ContainerStarted","Data":"26138cf966155f36f9794b8d6daaf6eedacfdafcfaa99b25284e3592c8a41ba5"} Nov 24 17:04:06 crc kubenswrapper[4768]: I1124 17:04:06.662494 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-9m6sf" Nov 24 17:04:06 crc kubenswrapper[4768]: I1124 17:04:06.684407 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-9m6sf" podStartSLOduration=3.684171575 podStartE2EDuration="3.684171575s" podCreationTimestamp="2025-11-24 17:04:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:04:06.681999424 +0000 UTC m=+727.928968082" watchObservedRunningTime="2025-11-24 17:04:06.684171575 +0000 UTC m=+727.931140233" Nov 24 17:04:11 crc kubenswrapper[4768]: I1124 17:04:11.700413 4768 generic.go:334] "Generic (PLEG): container finished" podID="7a21efe0-4145-43ac-9e98-31fecbc074d5" 
containerID="38d9593d9bb910df6575c60f809cafd45883003bcbc398fcadd086768c9ac934" exitCode=0 Nov 24 17:04:11 crc kubenswrapper[4768]: I1124 17:04:11.700482 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xt7mv" event={"ID":"7a21efe0-4145-43ac-9e98-31fecbc074d5","Type":"ContainerDied","Data":"38d9593d9bb910df6575c60f809cafd45883003bcbc398fcadd086768c9ac934"} Nov 24 17:04:11 crc kubenswrapper[4768]: I1124 17:04:11.703265 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-2szdn" event={"ID":"98a7049b-d1ef-41d1-aa13-62bc2f1657ea","Type":"ContainerStarted","Data":"408de64a85880a783ab6a0bb8d1198670450af77d52e04e2711304341757caaa"} Nov 24 17:04:11 crc kubenswrapper[4768]: I1124 17:04:11.703842 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-2szdn" Nov 24 17:04:11 crc kubenswrapper[4768]: I1124 17:04:11.752456 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-2szdn" podStartSLOduration=1.9537604929999999 podStartE2EDuration="8.75242708s" podCreationTimestamp="2025-11-24 17:04:03 +0000 UTC" firstStartedPulling="2025-11-24 17:04:04.516648958 +0000 UTC m=+725.763617616" lastFinishedPulling="2025-11-24 17:04:11.315315545 +0000 UTC m=+732.562284203" observedRunningTime="2025-11-24 17:04:11.748373177 +0000 UTC m=+732.995341875" watchObservedRunningTime="2025-11-24 17:04:11.75242708 +0000 UTC m=+732.999395778" Nov 24 17:04:12 crc kubenswrapper[4768]: I1124 17:04:12.712684 4768 generic.go:334] "Generic (PLEG): container finished" podID="7a21efe0-4145-43ac-9e98-31fecbc074d5" containerID="266eba84e4009dfaa00f198488f91e672c01370997c8e41fcaf462de3c94295a" exitCode=0 Nov 24 17:04:12 crc kubenswrapper[4768]: I1124 17:04:12.712759 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xt7mv" event={"ID":"7a21efe0-4145-43ac-9e98-31fecbc074d5","Type":"ContainerDied","Data":"266eba84e4009dfaa00f198488f91e672c01370997c8e41fcaf462de3c94295a"} Nov 24 17:04:13 crc kubenswrapper[4768]: I1124 17:04:13.723660 4768 generic.go:334] "Generic (PLEG): container finished" podID="7a21efe0-4145-43ac-9e98-31fecbc074d5" containerID="ed7e418e7a5478e82e1c2b16ef497602676fb1f370d4891ef604d98bc27c8ea3" exitCode=0 Nov 24 17:04:13 crc kubenswrapper[4768]: I1124 17:04:13.723741 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xt7mv" event={"ID":"7a21efe0-4145-43ac-9e98-31fecbc074d5","Type":"ContainerDied","Data":"ed7e418e7a5478e82e1c2b16ef497602676fb1f370d4891ef604d98bc27c8ea3"} Nov 24 17:04:14 crc kubenswrapper[4768]: I1124 17:04:14.737628 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xt7mv" event={"ID":"7a21efe0-4145-43ac-9e98-31fecbc074d5","Type":"ContainerStarted","Data":"b9f5080e880ef0fcf46bb0a7801750593b87df0c32fb6bf7f7d24f2ddae917c8"} Nov 24 17:04:14 crc kubenswrapper[4768]: I1124 17:04:14.737713 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xt7mv" event={"ID":"7a21efe0-4145-43ac-9e98-31fecbc074d5","Type":"ContainerStarted","Data":"545ece66b971dccb380a39e8b6ee69d601eedb9af3da9b76217faffd2038cd92"} Nov 24 17:04:14 crc kubenswrapper[4768]: I1124 17:04:14.737731 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xt7mv" 
event={"ID":"7a21efe0-4145-43ac-9e98-31fecbc074d5","Type":"ContainerStarted","Data":"fcc30f51ad2b5626921961a358020e3a51e313ac253b812b0d44dbbaf093444f"} Nov 24 17:04:14 crc kubenswrapper[4768]: I1124 17:04:14.737745 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xt7mv" event={"ID":"7a21efe0-4145-43ac-9e98-31fecbc074d5","Type":"ContainerStarted","Data":"fee0310e77390ca89ef9947fbc49ddcb44a7c7153c70a6198ddf21533b1dced0"} Nov 24 17:04:14 crc kubenswrapper[4768]: I1124 17:04:14.737760 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xt7mv" event={"ID":"7a21efe0-4145-43ac-9e98-31fecbc074d5","Type":"ContainerStarted","Data":"28a7cea0c2158e4f142755e7b47ac568cc151adcb07ec75addde46c4f40e974a"} Nov 24 17:04:14 crc kubenswrapper[4768]: I1124 17:04:14.737782 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xt7mv" event={"ID":"7a21efe0-4145-43ac-9e98-31fecbc074d5","Type":"ContainerStarted","Data":"0ae3a3240376e8e7a1b40e14b1114eb945915b91cf423ad8a2da0ac8a6cc9908"} Nov 24 17:04:14 crc kubenswrapper[4768]: I1124 17:04:14.737905 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:14 crc kubenswrapper[4768]: I1124 17:04:14.771122 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-xt7mv" podStartSLOduration=4.376241482 podStartE2EDuration="11.771096696s" podCreationTimestamp="2025-11-24 17:04:03 +0000 UTC" firstStartedPulling="2025-11-24 17:04:03.921110089 +0000 UTC m=+725.168078747" lastFinishedPulling="2025-11-24 17:04:11.315965283 +0000 UTC m=+732.562933961" observedRunningTime="2025-11-24 17:04:14.765900011 +0000 UTC m=+736.012868689" watchObservedRunningTime="2025-11-24 17:04:14.771096696 +0000 UTC m=+736.018065374" Nov 24 17:04:15 crc kubenswrapper[4768]: I1124 17:04:15.291062 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-9m6sf" Nov 24 17:04:18 crc kubenswrapper[4768]: I1124 17:04:18.213397 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-c9qnj"] Nov 24 17:04:18 crc kubenswrapper[4768]: I1124 17:04:18.214460 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-c9qnj" Nov 24 17:04:18 crc kubenswrapper[4768]: I1124 17:04:18.215428 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-c9qnj"] Nov 24 17:04:18 crc kubenswrapper[4768]: I1124 17:04:18.220701 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 24 17:04:18 crc kubenswrapper[4768]: I1124 17:04:18.221473 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 24 17:04:18 crc kubenswrapper[4768]: I1124 17:04:18.222082 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-24czj" Nov 24 17:04:18 crc kubenswrapper[4768]: I1124 17:04:18.238029 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gshz\" (UniqueName: \"kubernetes.io/projected/4afadab6-7579-46d8-9327-4ce4107a34d0-kube-api-access-2gshz\") pod \"openstack-operator-index-c9qnj\" (UID: \"4afadab6-7579-46d8-9327-4ce4107a34d0\") " pod="openstack-operators/openstack-operator-index-c9qnj" Nov 24 17:04:18 crc kubenswrapper[4768]: I1124 17:04:18.338839 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gshz\" (UniqueName: \"kubernetes.io/projected/4afadab6-7579-46d8-9327-4ce4107a34d0-kube-api-access-2gshz\") pod \"openstack-operator-index-c9qnj\" (UID: \"4afadab6-7579-46d8-9327-4ce4107a34d0\") " pod="openstack-operators/openstack-operator-index-c9qnj" Nov 24 17:04:18 crc kubenswrapper[4768]: I1124 17:04:18.357321 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gshz\" (UniqueName: \"kubernetes.io/projected/4afadab6-7579-46d8-9327-4ce4107a34d0-kube-api-access-2gshz\") pod \"openstack-operator-index-c9qnj\" (UID: \"4afadab6-7579-46d8-9327-4ce4107a34d0\") " pod="openstack-operators/openstack-operator-index-c9qnj" Nov 24 17:04:18 crc kubenswrapper[4768]: I1124 17:04:18.537266 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-c9qnj" Nov 24 17:04:18 crc kubenswrapper[4768]: I1124 17:04:18.660848 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:18 crc kubenswrapper[4768]: I1124 17:04:18.715840 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:18 crc kubenswrapper[4768]: I1124 17:04:18.783458 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-c9qnj"] Nov 24 17:04:19 crc kubenswrapper[4768]: I1124 17:04:19.792775 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-c9qnj" event={"ID":"4afadab6-7579-46d8-9327-4ce4107a34d0","Type":"ContainerStarted","Data":"531e311ca879f17181b7c5ca81eadceb1d2f4650e18e8b5ecc24b8535b3acb5a"} Nov 24 17:04:21 crc kubenswrapper[4768]: I1124 17:04:21.598584 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-c9qnj"] Nov 24 17:04:22 crc kubenswrapper[4768]: I1124 17:04:22.204589 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-777rr"] Nov 24 17:04:22 crc kubenswrapper[4768]: I1124 17:04:22.215438 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-777rr" Nov 24 17:04:22 crc kubenswrapper[4768]: I1124 17:04:22.228331 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-777rr"] Nov 24 17:04:22 crc kubenswrapper[4768]: I1124 17:04:22.313764 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pz5q\" (UniqueName: \"kubernetes.io/projected/87ecddb5-623c-40cb-ba80-c869cea78856-kube-api-access-9pz5q\") pod \"openstack-operator-index-777rr\" (UID: \"87ecddb5-623c-40cb-ba80-c869cea78856\") " pod="openstack-operators/openstack-operator-index-777rr" Nov 24 17:04:22 crc kubenswrapper[4768]: I1124 17:04:22.415073 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pz5q\" (UniqueName: \"kubernetes.io/projected/87ecddb5-623c-40cb-ba80-c869cea78856-kube-api-access-9pz5q\") pod \"openstack-operator-index-777rr\" (UID: \"87ecddb5-623c-40cb-ba80-c869cea78856\") " pod="openstack-operators/openstack-operator-index-777rr" Nov 24 17:04:22 crc kubenswrapper[4768]: I1124 17:04:22.434656 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pz5q\" (UniqueName: \"kubernetes.io/projected/87ecddb5-623c-40cb-ba80-c869cea78856-kube-api-access-9pz5q\") pod \"openstack-operator-index-777rr\" (UID: \"87ecddb5-623c-40cb-ba80-c869cea78856\") " pod="openstack-operators/openstack-operator-index-777rr" Nov 24 17:04:22 crc kubenswrapper[4768]: I1124 17:04:22.545931 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-777rr" Nov 24 17:04:22 crc kubenswrapper[4768]: I1124 17:04:22.822680 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-c9qnj" event={"ID":"4afadab6-7579-46d8-9327-4ce4107a34d0","Type":"ContainerStarted","Data":"f403cdc227e274a0cd740cbabc04de4702f4e6a699978579b0d92ed93ad6388c"} Nov 24 17:04:22 crc kubenswrapper[4768]: I1124 17:04:22.822769 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-c9qnj" podUID="4afadab6-7579-46d8-9327-4ce4107a34d0" containerName="registry-server" containerID="cri-o://f403cdc227e274a0cd740cbabc04de4702f4e6a699978579b0d92ed93ad6388c" gracePeriod=2 Nov 24 17:04:22 crc kubenswrapper[4768]: I1124 17:04:22.850747 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-c9qnj" podStartSLOduration=1.394682596 podStartE2EDuration="4.850714214s" podCreationTimestamp="2025-11-24 17:04:18 +0000 UTC" firstStartedPulling="2025-11-24 17:04:18.777542086 +0000 UTC m=+740.024510744" lastFinishedPulling="2025-11-24 17:04:22.233573664 +0000 UTC m=+743.480542362" observedRunningTime="2025-11-24 17:04:22.844205833 +0000 UTC m=+744.091174561" watchObservedRunningTime="2025-11-24 17:04:22.850714214 +0000 UTC m=+744.097682912" Nov 24 17:04:22 crc kubenswrapper[4768]: I1124 17:04:22.865716 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-777rr"] Nov 24 17:04:22 crc kubenswrapper[4768]: W1124 17:04:22.910595 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87ecddb5_623c_40cb_ba80_c869cea78856.slice/crio-1d8e1ae8107d7087affea7c64e24dc451a9c946a760a7de7af5fea2a50b6b00d WatchSource:0}: Error finding container 1d8e1ae8107d7087affea7c64e24dc451a9c946a760a7de7af5fea2a50b6b00d: Status 404 returned error can't find the container with id 1d8e1ae8107d7087affea7c64e24dc451a9c946a760a7de7af5fea2a50b6b00d Nov 24 17:04:23 crc kubenswrapper[4768]: I1124 17:04:23.161233 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-c9qnj" Nov 24 17:04:23 crc kubenswrapper[4768]: I1124 17:04:23.228968 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gshz\" (UniqueName: \"kubernetes.io/projected/4afadab6-7579-46d8-9327-4ce4107a34d0-kube-api-access-2gshz\") pod \"4afadab6-7579-46d8-9327-4ce4107a34d0\" (UID: \"4afadab6-7579-46d8-9327-4ce4107a34d0\") " Nov 24 17:04:23 crc kubenswrapper[4768]: I1124 17:04:23.238845 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4afadab6-7579-46d8-9327-4ce4107a34d0-kube-api-access-2gshz" (OuterVolumeSpecName: "kube-api-access-2gshz") pod "4afadab6-7579-46d8-9327-4ce4107a34d0" (UID: "4afadab6-7579-46d8-9327-4ce4107a34d0"). InnerVolumeSpecName "kube-api-access-2gshz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:04:23 crc kubenswrapper[4768]: I1124 17:04:23.330898 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gshz\" (UniqueName: \"kubernetes.io/projected/4afadab6-7579-46d8-9327-4ce4107a34d0-kube-api-access-2gshz\") on node \"crc\" DevicePath \"\"" Nov 24 17:04:23 crc kubenswrapper[4768]: I1124 17:04:23.664581 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-xt7mv" Nov 24 17:04:23 crc kubenswrapper[4768]: I1124 17:04:23.834625 4768 generic.go:334] "Generic (PLEG): container finished" podID="4afadab6-7579-46d8-9327-4ce4107a34d0" containerID="f403cdc227e274a0cd740cbabc04de4702f4e6a699978579b0d92ed93ad6388c" exitCode=0 Nov 24 17:04:23 crc kubenswrapper[4768]: I1124 17:04:23.834745 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-c9qnj" Nov 24 17:04:23 crc kubenswrapper[4768]: I1124 17:04:23.835420 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-c9qnj" event={"ID":"4afadab6-7579-46d8-9327-4ce4107a34d0","Type":"ContainerDied","Data":"f403cdc227e274a0cd740cbabc04de4702f4e6a699978579b0d92ed93ad6388c"} Nov 24 17:04:23 crc kubenswrapper[4768]: I1124 17:04:23.835520 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-c9qnj" event={"ID":"4afadab6-7579-46d8-9327-4ce4107a34d0","Type":"ContainerDied","Data":"531e311ca879f17181b7c5ca81eadceb1d2f4650e18e8b5ecc24b8535b3acb5a"} Nov 24 17:04:23 crc kubenswrapper[4768]: I1124 17:04:23.835563 4768 scope.go:117] "RemoveContainer" containerID="f403cdc227e274a0cd740cbabc04de4702f4e6a699978579b0d92ed93ad6388c" Nov 24 17:04:23 crc kubenswrapper[4768]: I1124 17:04:23.839419 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-777rr" event={"ID":"87ecddb5-623c-40cb-ba80-c869cea78856","Type":"ContainerStarted","Data":"5facfac0de87cdb1caccecfd6e60f22d3354a7109c94beed0c0bac02c7492e4c"} Nov 24 17:04:23 crc kubenswrapper[4768]: I1124 17:04:23.839469 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-777rr" event={"ID":"87ecddb5-623c-40cb-ba80-c869cea78856","Type":"ContainerStarted","Data":"1d8e1ae8107d7087affea7c64e24dc451a9c946a760a7de7af5fea2a50b6b00d"} Nov 24 17:04:23 crc kubenswrapper[4768]: I1124 17:04:23.869686 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-6rm47" Nov 24 17:04:23 crc kubenswrapper[4768]: I1124 17:04:23.877008 4768 scope.go:117] "RemoveContainer" containerID="f403cdc227e274a0cd740cbabc04de4702f4e6a699978579b0d92ed93ad6388c" Nov 24 17:04:23 crc kubenswrapper[4768]: E1124 17:04:23.880907 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f403cdc227e274a0cd740cbabc04de4702f4e6a699978579b0d92ed93ad6388c\": container with ID starting with f403cdc227e274a0cd740cbabc04de4702f4e6a699978579b0d92ed93ad6388c not found: ID does not exist" containerID="f403cdc227e274a0cd740cbabc04de4702f4e6a699978579b0d92ed93ad6388c" Nov 24 17:04:23 crc kubenswrapper[4768]: I1124 17:04:23.880957 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f403cdc227e274a0cd740cbabc04de4702f4e6a699978579b0d92ed93ad6388c"} err="failed to get container status 
\"f403cdc227e274a0cd740cbabc04de4702f4e6a699978579b0d92ed93ad6388c\": rpc error: code = NotFound desc = could not find container \"f403cdc227e274a0cd740cbabc04de4702f4e6a699978579b0d92ed93ad6388c\": container with ID starting with f403cdc227e274a0cd740cbabc04de4702f4e6a699978579b0d92ed93ad6388c not found: ID does not exist" Nov 24 17:04:23 crc kubenswrapper[4768]: I1124 17:04:23.908844 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-777rr" podStartSLOduration=1.862858708 podStartE2EDuration="1.908814798s" podCreationTimestamp="2025-11-24 17:04:22 +0000 UTC" firstStartedPulling="2025-11-24 17:04:22.914511331 +0000 UTC m=+744.161479999" lastFinishedPulling="2025-11-24 17:04:22.960467431 +0000 UTC m=+744.207436089" observedRunningTime="2025-11-24 17:04:23.87120037 +0000 UTC m=+745.118169068" watchObservedRunningTime="2025-11-24 17:04:23.908814798 +0000 UTC m=+745.155783496" Nov 24 17:04:23 crc kubenswrapper[4768]: I1124 17:04:23.931935 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-c9qnj"] Nov 24 17:04:23 crc kubenswrapper[4768]: I1124 17:04:23.935955 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-c9qnj"] Nov 24 17:04:24 crc kubenswrapper[4768]: I1124 17:04:24.283209 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-2szdn" Nov 24 17:04:25 crc kubenswrapper[4768]: I1124 17:04:25.596934 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4afadab6-7579-46d8-9327-4ce4107a34d0" path="/var/lib/kubelet/pods/4afadab6-7579-46d8-9327-4ce4107a34d0/volumes" Nov 24 17:04:28 crc kubenswrapper[4768]: I1124 17:04:28.872587 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kbq4r"] Nov 24 17:04:28 crc kubenswrapper[4768]: I1124 17:04:28.873178 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" podUID="76f7811c-28c6-4764-b44a-07cbfdb400c4" containerName="controller-manager" containerID="cri-o://57da5aca068148a194f423e041a9747cb57be073aa925e6fd67d49fbe083ce4a" gracePeriod=30 Nov 24 17:04:28 crc kubenswrapper[4768]: I1124 17:04:28.973195 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64"] Nov 24 17:04:28 crc kubenswrapper[4768]: I1124 17:04:28.973407 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64" podUID="c2dfd5ab-15ef-445c-954a-5e5ebe90a95d" containerName="route-controller-manager" containerID="cri-o://5f75c2c8071c91c372017515de31aaca7b416e863a77ecde4e03e46594604c87" gracePeriod=30 Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.228001 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.299860 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.426735 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c79p\" (UniqueName: \"kubernetes.io/projected/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-kube-api-access-2c79p\") pod \"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d\" (UID: \"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d\") " Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.426798 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-config\") pod \"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d\" (UID: \"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d\") " Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.426841 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cp8w\" (UniqueName: \"kubernetes.io/projected/76f7811c-28c6-4764-b44a-07cbfdb400c4-kube-api-access-6cp8w\") pod \"76f7811c-28c6-4764-b44a-07cbfdb400c4\" (UID: \"76f7811c-28c6-4764-b44a-07cbfdb400c4\") " Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.426859 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-serving-cert\") pod \"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d\" (UID: \"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d\") " Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.426878 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76f7811c-28c6-4764-b44a-07cbfdb400c4-serving-cert\") pod \"76f7811c-28c6-4764-b44a-07cbfdb400c4\" (UID: \"76f7811c-28c6-4764-b44a-07cbfdb400c4\") " Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.426897 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76f7811c-28c6-4764-b44a-07cbfdb400c4-proxy-ca-bundles\") pod \"76f7811c-28c6-4764-b44a-07cbfdb400c4\" (UID: \"76f7811c-28c6-4764-b44a-07cbfdb400c4\") " Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.426918 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76f7811c-28c6-4764-b44a-07cbfdb400c4-client-ca\") pod \"76f7811c-28c6-4764-b44a-07cbfdb400c4\" (UID: \"76f7811c-28c6-4764-b44a-07cbfdb400c4\") " Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.426948 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76f7811c-28c6-4764-b44a-07cbfdb400c4-config\") pod \"76f7811c-28c6-4764-b44a-07cbfdb400c4\" (UID: \"76f7811c-28c6-4764-b44a-07cbfdb400c4\") " Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.427799 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-client-ca\") pod \"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d\" (UID: \"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d\") " Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.427707 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76f7811c-28c6-4764-b44a-07cbfdb400c4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod 
"76f7811c-28c6-4764-b44a-07cbfdb400c4" (UID: "76f7811c-28c6-4764-b44a-07cbfdb400c4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.427721 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76f7811c-28c6-4764-b44a-07cbfdb400c4-client-ca" (OuterVolumeSpecName: "client-ca") pod "76f7811c-28c6-4764-b44a-07cbfdb400c4" (UID: "76f7811c-28c6-4764-b44a-07cbfdb400c4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.427827 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76f7811c-28c6-4764-b44a-07cbfdb400c4-config" (OuterVolumeSpecName: "config") pod "76f7811c-28c6-4764-b44a-07cbfdb400c4" (UID: "76f7811c-28c6-4764-b44a-07cbfdb400c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.428234 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-client-ca" (OuterVolumeSpecName: "client-ca") pod "c2dfd5ab-15ef-445c-954a-5e5ebe90a95d" (UID: "c2dfd5ab-15ef-445c-954a-5e5ebe90a95d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.428300 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-config" (OuterVolumeSpecName: "config") pod "c2dfd5ab-15ef-445c-954a-5e5ebe90a95d" (UID: "c2dfd5ab-15ef-445c-954a-5e5ebe90a95d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.428532 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.428549 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76f7811c-28c6-4764-b44a-07cbfdb400c4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.428564 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76f7811c-28c6-4764-b44a-07cbfdb400c4-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.428575 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76f7811c-28c6-4764-b44a-07cbfdb400c4-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.428587 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.432714 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-kube-api-access-2c79p" (OuterVolumeSpecName: "kube-api-access-2c79p") pod "c2dfd5ab-15ef-445c-954a-5e5ebe90a95d" (UID: "c2dfd5ab-15ef-445c-954a-5e5ebe90a95d"). 
InnerVolumeSpecName "kube-api-access-2c79p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.433038 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76f7811c-28c6-4764-b44a-07cbfdb400c4-kube-api-access-6cp8w" (OuterVolumeSpecName: "kube-api-access-6cp8w") pod "76f7811c-28c6-4764-b44a-07cbfdb400c4" (UID: "76f7811c-28c6-4764-b44a-07cbfdb400c4"). InnerVolumeSpecName "kube-api-access-6cp8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.433057 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76f7811c-28c6-4764-b44a-07cbfdb400c4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "76f7811c-28c6-4764-b44a-07cbfdb400c4" (UID: "76f7811c-28c6-4764-b44a-07cbfdb400c4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.433106 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c2dfd5ab-15ef-445c-954a-5e5ebe90a95d" (UID: "c2dfd5ab-15ef-445c-954a-5e5ebe90a95d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.530306 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c79p\" (UniqueName: \"kubernetes.io/projected/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-kube-api-access-2c79p\") on node \"crc\" DevicePath \"\"" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.530341 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cp8w\" (UniqueName: \"kubernetes.io/projected/76f7811c-28c6-4764-b44a-07cbfdb400c4-kube-api-access-6cp8w\") on node \"crc\" DevicePath \"\"" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.530364 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.530375 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76f7811c-28c6-4764-b44a-07cbfdb400c4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.897597 4768 generic.go:334] "Generic (PLEG): container finished" podID="76f7811c-28c6-4764-b44a-07cbfdb400c4" containerID="57da5aca068148a194f423e041a9747cb57be073aa925e6fd67d49fbe083ce4a" exitCode=0 Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.897651 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" event={"ID":"76f7811c-28c6-4764-b44a-07cbfdb400c4","Type":"ContainerDied","Data":"57da5aca068148a194f423e041a9747cb57be073aa925e6fd67d49fbe083ce4a"} Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.897698 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" event={"ID":"76f7811c-28c6-4764-b44a-07cbfdb400c4","Type":"ContainerDied","Data":"883ffa210144f79c2a2208615710ce575b7a1b1c54fc7a2c26331b21cfea5de0"} Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.897716 4768 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-kbq4r" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.897727 4768 scope.go:117] "RemoveContainer" containerID="57da5aca068148a194f423e041a9747cb57be073aa925e6fd67d49fbe083ce4a" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.903308 4768 generic.go:334] "Generic (PLEG): container finished" podID="c2dfd5ab-15ef-445c-954a-5e5ebe90a95d" containerID="5f75c2c8071c91c372017515de31aaca7b416e863a77ecde4e03e46594604c87" exitCode=0 Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.903366 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64" event={"ID":"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d","Type":"ContainerDied","Data":"5f75c2c8071c91c372017515de31aaca7b416e863a77ecde4e03e46594604c87"} Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.903393 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64" event={"ID":"c2dfd5ab-15ef-445c-954a-5e5ebe90a95d","Type":"ContainerDied","Data":"d7e67419bafdb55b177f08cb90adf079fa249c104d5fb05c1d95faeb7c805099"} Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.903810 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.932602 4768 scope.go:117] "RemoveContainer" containerID="57da5aca068148a194f423e041a9747cb57be073aa925e6fd67d49fbe083ce4a" Nov 24 17:04:29 crc kubenswrapper[4768]: E1124 17:04:29.933373 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57da5aca068148a194f423e041a9747cb57be073aa925e6fd67d49fbe083ce4a\": container with ID starting with 57da5aca068148a194f423e041a9747cb57be073aa925e6fd67d49fbe083ce4a not found: ID does not exist" containerID="57da5aca068148a194f423e041a9747cb57be073aa925e6fd67d49fbe083ce4a" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.933405 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57da5aca068148a194f423e041a9747cb57be073aa925e6fd67d49fbe083ce4a"} err="failed to get container status \"57da5aca068148a194f423e041a9747cb57be073aa925e6fd67d49fbe083ce4a\": rpc error: code = NotFound desc = could not find container \"57da5aca068148a194f423e041a9747cb57be073aa925e6fd67d49fbe083ce4a\": container with ID starting with 57da5aca068148a194f423e041a9747cb57be073aa925e6fd67d49fbe083ce4a not found: ID does not exist" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.933429 4768 scope.go:117] "RemoveContainer" containerID="5f75c2c8071c91c372017515de31aaca7b416e863a77ecde4e03e46594604c87" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.934749 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kbq4r"] Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.942995 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-kbq4r"] Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.949559 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64"] Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.955294 4768 scope.go:117] 
"RemoveContainer" containerID="5f75c2c8071c91c372017515de31aaca7b416e863a77ecde4e03e46594604c87" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.955528 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzf64"] Nov 24 17:04:29 crc kubenswrapper[4768]: E1124 17:04:29.956171 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f75c2c8071c91c372017515de31aaca7b416e863a77ecde4e03e46594604c87\": container with ID starting with 5f75c2c8071c91c372017515de31aaca7b416e863a77ecde4e03e46594604c87 not found: ID does not exist" containerID="5f75c2c8071c91c372017515de31aaca7b416e863a77ecde4e03e46594604c87" Nov 24 17:04:29 crc kubenswrapper[4768]: I1124 17:04:29.956210 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f75c2c8071c91c372017515de31aaca7b416e863a77ecde4e03e46594604c87"} err="failed to get container status \"5f75c2c8071c91c372017515de31aaca7b416e863a77ecde4e03e46594604c87\": rpc error: code = NotFound desc = could not find container \"5f75c2c8071c91c372017515de31aaca7b416e863a77ecde4e03e46594604c87\": container with ID starting with 5f75c2c8071c91c372017515de31aaca7b416e863a77ecde4e03e46594604c87 not found: ID does not exist" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.113968 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857"] Nov 24 17:04:30 crc kubenswrapper[4768]: E1124 17:04:30.114630 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76f7811c-28c6-4764-b44a-07cbfdb400c4" containerName="controller-manager" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.114657 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="76f7811c-28c6-4764-b44a-07cbfdb400c4" containerName="controller-manager" Nov 24 17:04:30 crc kubenswrapper[4768]: E1124 17:04:30.114674 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2dfd5ab-15ef-445c-954a-5e5ebe90a95d" containerName="route-controller-manager" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.114684 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2dfd5ab-15ef-445c-954a-5e5ebe90a95d" containerName="route-controller-manager" Nov 24 17:04:30 crc kubenswrapper[4768]: E1124 17:04:30.114698 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4afadab6-7579-46d8-9327-4ce4107a34d0" containerName="registry-server" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.114706 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4afadab6-7579-46d8-9327-4ce4107a34d0" containerName="registry-server" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.114847 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2dfd5ab-15ef-445c-954a-5e5ebe90a95d" containerName="route-controller-manager" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.114878 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="4afadab6-7579-46d8-9327-4ce4107a34d0" containerName="registry-server" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.114888 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="76f7811c-28c6-4764-b44a-07cbfdb400c4" containerName="controller-manager" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.115540 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.118339 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.118458 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.118485 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.118469 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.118503 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.120708 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.121723 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857"] Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.147288 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnt4c\" (UniqueName: \"kubernetes.io/projected/de485e08-e15a-402d-81c7-f4b591f69b98-kube-api-access-cnt4c\") pod \"route-controller-manager-857cd9b856-dw857\" (UID: \"de485e08-e15a-402d-81c7-f4b591f69b98\") " pod="openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.147392 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de485e08-e15a-402d-81c7-f4b591f69b98-config\") pod \"route-controller-manager-857cd9b856-dw857\" (UID: \"de485e08-e15a-402d-81c7-f4b591f69b98\") " pod="openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.147425 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de485e08-e15a-402d-81c7-f4b591f69b98-serving-cert\") pod \"route-controller-manager-857cd9b856-dw857\" (UID: \"de485e08-e15a-402d-81c7-f4b591f69b98\") " pod="openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.147452 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de485e08-e15a-402d-81c7-f4b591f69b98-client-ca\") pod \"route-controller-manager-857cd9b856-dw857\" (UID: \"de485e08-e15a-402d-81c7-f4b591f69b98\") " pod="openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.248515 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnt4c\" (UniqueName: \"kubernetes.io/projected/de485e08-e15a-402d-81c7-f4b591f69b98-kube-api-access-cnt4c\") pod 
\"route-controller-manager-857cd9b856-dw857\" (UID: \"de485e08-e15a-402d-81c7-f4b591f69b98\") " pod="openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.248580 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de485e08-e15a-402d-81c7-f4b591f69b98-config\") pod \"route-controller-manager-857cd9b856-dw857\" (UID: \"de485e08-e15a-402d-81c7-f4b591f69b98\") " pod="openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.248607 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de485e08-e15a-402d-81c7-f4b591f69b98-serving-cert\") pod \"route-controller-manager-857cd9b856-dw857\" (UID: \"de485e08-e15a-402d-81c7-f4b591f69b98\") " pod="openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.248631 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de485e08-e15a-402d-81c7-f4b591f69b98-client-ca\") pod \"route-controller-manager-857cd9b856-dw857\" (UID: \"de485e08-e15a-402d-81c7-f4b591f69b98\") " pod="openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.249549 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de485e08-e15a-402d-81c7-f4b591f69b98-client-ca\") pod \"route-controller-manager-857cd9b856-dw857\" (UID: \"de485e08-e15a-402d-81c7-f4b591f69b98\") " pod="openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.249853 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de485e08-e15a-402d-81c7-f4b591f69b98-config\") pod \"route-controller-manager-857cd9b856-dw857\" (UID: \"de485e08-e15a-402d-81c7-f4b591f69b98\") " pod="openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.257302 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de485e08-e15a-402d-81c7-f4b591f69b98-serving-cert\") pod \"route-controller-manager-857cd9b856-dw857\" (UID: \"de485e08-e15a-402d-81c7-f4b591f69b98\") " pod="openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.284267 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnt4c\" (UniqueName: \"kubernetes.io/projected/de485e08-e15a-402d-81c7-f4b591f69b98-kube-api-access-cnt4c\") pod \"route-controller-manager-857cd9b856-dw857\" (UID: \"de485e08-e15a-402d-81c7-f4b591f69b98\") " pod="openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.445445 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.711492 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857"] Nov 24 17:04:30 crc kubenswrapper[4768]: W1124 17:04:30.711924 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde485e08_e15a_402d_81c7_f4b591f69b98.slice/crio-fa38df9d493207b8e462b1a992b061f0709eb6ab45ac52c97fa0d9805801b102 WatchSource:0}: Error finding container fa38df9d493207b8e462b1a992b061f0709eb6ab45ac52c97fa0d9805801b102: Status 404 returned error can't find the container with id fa38df9d493207b8e462b1a992b061f0709eb6ab45ac52c97fa0d9805801b102 Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.785630 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-597665896-wrqgd"] Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.786634 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-597665896-wrqgd" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.791941 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.792961 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.793314 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.794198 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.794268 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.794432 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.800865 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-597665896-wrqgd"] Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.802087 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.857042 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29f8b9c4-d13e-4e63-98ca-54a3f09074bd-proxy-ca-bundles\") pod \"controller-manager-597665896-wrqgd\" (UID: \"29f8b9c4-d13e-4e63-98ca-54a3f09074bd\") " pod="openshift-controller-manager/controller-manager-597665896-wrqgd" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.857128 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29f8b9c4-d13e-4e63-98ca-54a3f09074bd-client-ca\") pod \"controller-manager-597665896-wrqgd\" (UID: \"29f8b9c4-d13e-4e63-98ca-54a3f09074bd\") " 
pod="openshift-controller-manager/controller-manager-597665896-wrqgd" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.857190 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29f8b9c4-d13e-4e63-98ca-54a3f09074bd-config\") pod \"controller-manager-597665896-wrqgd\" (UID: \"29f8b9c4-d13e-4e63-98ca-54a3f09074bd\") " pod="openshift-controller-manager/controller-manager-597665896-wrqgd" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.857465 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29f8b9c4-d13e-4e63-98ca-54a3f09074bd-serving-cert\") pod \"controller-manager-597665896-wrqgd\" (UID: \"29f8b9c4-d13e-4e63-98ca-54a3f09074bd\") " pod="openshift-controller-manager/controller-manager-597665896-wrqgd" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.857531 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jbzg\" (UniqueName: \"kubernetes.io/projected/29f8b9c4-d13e-4e63-98ca-54a3f09074bd-kube-api-access-7jbzg\") pod \"controller-manager-597665896-wrqgd\" (UID: \"29f8b9c4-d13e-4e63-98ca-54a3f09074bd\") " pod="openshift-controller-manager/controller-manager-597665896-wrqgd" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.913405 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857" event={"ID":"de485e08-e15a-402d-81c7-f4b591f69b98","Type":"ContainerStarted","Data":"ece4ff36daf083ea1107eb77785f8d96a7d63469b1b078cd69b6cff539b812c1"} Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.913456 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857" event={"ID":"de485e08-e15a-402d-81c7-f4b591f69b98","Type":"ContainerStarted","Data":"fa38df9d493207b8e462b1a992b061f0709eb6ab45ac52c97fa0d9805801b102"} Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.914685 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.920952 4768 patch_prober.go:28] interesting pod/route-controller-manager-857cd9b856-dw857 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.53:8443/healthz\": dial tcp 10.217.0.53:8443: connect: connection refused" start-of-body= Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.921004 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857" podUID="de485e08-e15a-402d-81c7-f4b591f69b98" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.53:8443/healthz\": dial tcp 10.217.0.53:8443: connect: connection refused" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.939228 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857" podStartSLOduration=0.939209557 podStartE2EDuration="939.209557ms" podCreationTimestamp="2025-11-24 17:04:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-24 17:04:30.933754965 +0000 UTC m=+752.180723633" watchObservedRunningTime="2025-11-24 17:04:30.939209557 +0000 UTC m=+752.186178225" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.958589 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29f8b9c4-d13e-4e63-98ca-54a3f09074bd-proxy-ca-bundles\") pod \"controller-manager-597665896-wrqgd\" (UID: \"29f8b9c4-d13e-4e63-98ca-54a3f09074bd\") " pod="openshift-controller-manager/controller-manager-597665896-wrqgd" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.958672 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29f8b9c4-d13e-4e63-98ca-54a3f09074bd-client-ca\") pod \"controller-manager-597665896-wrqgd\" (UID: \"29f8b9c4-d13e-4e63-98ca-54a3f09074bd\") " pod="openshift-controller-manager/controller-manager-597665896-wrqgd" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.958718 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29f8b9c4-d13e-4e63-98ca-54a3f09074bd-config\") pod \"controller-manager-597665896-wrqgd\" (UID: \"29f8b9c4-d13e-4e63-98ca-54a3f09074bd\") " pod="openshift-controller-manager/controller-manager-597665896-wrqgd" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.958807 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29f8b9c4-d13e-4e63-98ca-54a3f09074bd-serving-cert\") pod \"controller-manager-597665896-wrqgd\" (UID: \"29f8b9c4-d13e-4e63-98ca-54a3f09074bd\") " pod="openshift-controller-manager/controller-manager-597665896-wrqgd" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.958864 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jbzg\" (UniqueName: \"kubernetes.io/projected/29f8b9c4-d13e-4e63-98ca-54a3f09074bd-kube-api-access-7jbzg\") pod \"controller-manager-597665896-wrqgd\" (UID: \"29f8b9c4-d13e-4e63-98ca-54a3f09074bd\") " pod="openshift-controller-manager/controller-manager-597665896-wrqgd" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.960433 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29f8b9c4-d13e-4e63-98ca-54a3f09074bd-client-ca\") pod \"controller-manager-597665896-wrqgd\" (UID: \"29f8b9c4-d13e-4e63-98ca-54a3f09074bd\") " pod="openshift-controller-manager/controller-manager-597665896-wrqgd" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.960501 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/29f8b9c4-d13e-4e63-98ca-54a3f09074bd-proxy-ca-bundles\") pod \"controller-manager-597665896-wrqgd\" (UID: \"29f8b9c4-d13e-4e63-98ca-54a3f09074bd\") " pod="openshift-controller-manager/controller-manager-597665896-wrqgd" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.960834 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29f8b9c4-d13e-4e63-98ca-54a3f09074bd-config\") pod \"controller-manager-597665896-wrqgd\" (UID: \"29f8b9c4-d13e-4e63-98ca-54a3f09074bd\") " pod="openshift-controller-manager/controller-manager-597665896-wrqgd" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.963848 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29f8b9c4-d13e-4e63-98ca-54a3f09074bd-serving-cert\") pod \"controller-manager-597665896-wrqgd\" (UID: \"29f8b9c4-d13e-4e63-98ca-54a3f09074bd\") " pod="openshift-controller-manager/controller-manager-597665896-wrqgd" Nov 24 17:04:30 crc kubenswrapper[4768]: I1124 17:04:30.979045 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jbzg\" (UniqueName: \"kubernetes.io/projected/29f8b9c4-d13e-4e63-98ca-54a3f09074bd-kube-api-access-7jbzg\") pod \"controller-manager-597665896-wrqgd\" (UID: \"29f8b9c4-d13e-4e63-98ca-54a3f09074bd\") " pod="openshift-controller-manager/controller-manager-597665896-wrqgd" Nov 24 17:04:31 crc kubenswrapper[4768]: I1124 17:04:31.112523 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-597665896-wrqgd" Nov 24 17:04:31 crc kubenswrapper[4768]: I1124 17:04:31.340245 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-597665896-wrqgd"] Nov 24 17:04:31 crc kubenswrapper[4768]: W1124 17:04:31.347913 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29f8b9c4_d13e_4e63_98ca_54a3f09074bd.slice/crio-1d8250e9e0950fc3973de718357f7f50f81823d64a450f05030e3c6eeeeb4613 WatchSource:0}: Error finding container 1d8250e9e0950fc3973de718357f7f50f81823d64a450f05030e3c6eeeeb4613: Status 404 returned error can't find the container with id 1d8250e9e0950fc3973de718357f7f50f81823d64a450f05030e3c6eeeeb4613 Nov 24 17:04:31 crc kubenswrapper[4768]: I1124 17:04:31.589206 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76f7811c-28c6-4764-b44a-07cbfdb400c4" path="/var/lib/kubelet/pods/76f7811c-28c6-4764-b44a-07cbfdb400c4/volumes" Nov 24 17:04:31 crc kubenswrapper[4768]: I1124 17:04:31.590055 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2dfd5ab-15ef-445c-954a-5e5ebe90a95d" path="/var/lib/kubelet/pods/c2dfd5ab-15ef-445c-954a-5e5ebe90a95d/volumes" Nov 24 17:04:31 crc kubenswrapper[4768]: I1124 17:04:31.920975 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-597665896-wrqgd" event={"ID":"29f8b9c4-d13e-4e63-98ca-54a3f09074bd","Type":"ContainerStarted","Data":"19e7e3bf7a129f2f44ab59221736b54ace3f25931c5c4e8ad52fceb0fca11285"} Nov 24 17:04:31 crc kubenswrapper[4768]: I1124 17:04:31.921041 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-597665896-wrqgd" event={"ID":"29f8b9c4-d13e-4e63-98ca-54a3f09074bd","Type":"ContainerStarted","Data":"1d8250e9e0950fc3973de718357f7f50f81823d64a450f05030e3c6eeeeb4613"} Nov 24 17:04:31 crc kubenswrapper[4768]: I1124 17:04:31.927187 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-857cd9b856-dw857" Nov 24 17:04:31 crc kubenswrapper[4768]: I1124 17:04:31.939959 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-597665896-wrqgd" podStartSLOduration=2.939936713 podStartE2EDuration="2.939936713s" podCreationTimestamp="2025-11-24 17:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-24 17:04:31.936756734 +0000 UTC m=+753.183725432" watchObservedRunningTime="2025-11-24 17:04:31.939936713 +0000 UTC m=+753.186905391" Nov 24 17:04:32 crc kubenswrapper[4768]: I1124 17:04:32.546854 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-777rr" Nov 24 17:04:32 crc kubenswrapper[4768]: I1124 17:04:32.546928 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-777rr" Nov 24 17:04:32 crc kubenswrapper[4768]: I1124 17:04:32.591603 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-777rr" Nov 24 17:04:32 crc kubenswrapper[4768]: I1124 17:04:32.930170 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-597665896-wrqgd" Nov 24 17:04:32 crc kubenswrapper[4768]: I1124 17:04:32.933659 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-597665896-wrqgd" Nov 24 17:04:33 crc kubenswrapper[4768]: I1124 17:04:33.017442 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-777rr" Nov 24 17:04:36 crc kubenswrapper[4768]: I1124 17:04:36.056801 4768 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 17:04:38 crc kubenswrapper[4768]: I1124 17:04:38.715032 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz"] Nov 24 17:04:38 crc kubenswrapper[4768]: I1124 17:04:38.716838 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz" Nov 24 17:04:38 crc kubenswrapper[4768]: I1124 17:04:38.719901 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-l472n" Nov 24 17:04:38 crc kubenswrapper[4768]: I1124 17:04:38.737982 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz"] Nov 24 17:04:38 crc kubenswrapper[4768]: I1124 17:04:38.817407 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cknnp\" (UniqueName: \"kubernetes.io/projected/3986c16f-d992-4d26-9f12-0892ffc031d6-kube-api-access-cknnp\") pod \"3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz\" (UID: \"3986c16f-d992-4d26-9f12-0892ffc031d6\") " pod="openstack-operators/3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz" Nov 24 17:04:38 crc kubenswrapper[4768]: I1124 17:04:38.817547 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3986c16f-d992-4d26-9f12-0892ffc031d6-bundle\") pod \"3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz\" (UID: \"3986c16f-d992-4d26-9f12-0892ffc031d6\") " pod="openstack-operators/3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz" Nov 24 17:04:38 crc kubenswrapper[4768]: I1124 17:04:38.817647 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3986c16f-d992-4d26-9f12-0892ffc031d6-util\") pod \"3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz\" (UID: \"3986c16f-d992-4d26-9f12-0892ffc031d6\") " pod="openstack-operators/3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz" Nov 24 17:04:38 crc kubenswrapper[4768]: I1124 17:04:38.919078 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cknnp\" (UniqueName: \"kubernetes.io/projected/3986c16f-d992-4d26-9f12-0892ffc031d6-kube-api-access-cknnp\") pod \"3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz\" (UID: \"3986c16f-d992-4d26-9f12-0892ffc031d6\") " pod="openstack-operators/3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz" Nov 24 17:04:38 crc kubenswrapper[4768]: I1124 17:04:38.919184 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3986c16f-d992-4d26-9f12-0892ffc031d6-bundle\") pod \"3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz\" (UID: \"3986c16f-d992-4d26-9f12-0892ffc031d6\") " pod="openstack-operators/3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz" Nov 24 17:04:38 crc kubenswrapper[4768]: I1124 17:04:38.919263 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3986c16f-d992-4d26-9f12-0892ffc031d6-util\") pod \"3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz\" (UID: \"3986c16f-d992-4d26-9f12-0892ffc031d6\") " pod="openstack-operators/3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz" Nov 24 17:04:38 crc kubenswrapper[4768]: I1124 17:04:38.919785 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3986c16f-d992-4d26-9f12-0892ffc031d6-util\") pod \"3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz\" (UID: \"3986c16f-d992-4d26-9f12-0892ffc031d6\") " pod="openstack-operators/3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz" Nov 24 17:04:38 crc kubenswrapper[4768]: I1124 17:04:38.919989 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3986c16f-d992-4d26-9f12-0892ffc031d6-bundle\") pod \"3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz\" (UID: \"3986c16f-d992-4d26-9f12-0892ffc031d6\") " pod="openstack-operators/3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz" Nov 24 17:04:38 crc kubenswrapper[4768]: I1124 17:04:38.942209 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cknnp\" (UniqueName: \"kubernetes.io/projected/3986c16f-d992-4d26-9f12-0892ffc031d6-kube-api-access-cknnp\") pod \"3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz\" (UID: \"3986c16f-d992-4d26-9f12-0892ffc031d6\") " pod="openstack-operators/3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz" Nov 24 17:04:39 crc kubenswrapper[4768]: I1124 17:04:39.039339 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz" Nov 24 17:04:39 crc kubenswrapper[4768]: I1124 17:04:39.586911 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz"] Nov 24 17:04:39 crc kubenswrapper[4768]: I1124 17:04:39.993654 4768 generic.go:334] "Generic (PLEG): container finished" podID="3986c16f-d992-4d26-9f12-0892ffc031d6" containerID="584d50092874661a84fd5b30b64ab0254bddbd7d04628f1496e3c63fbe2f1e84" exitCode=0 Nov 24 17:04:39 crc kubenswrapper[4768]: I1124 17:04:39.993761 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz" event={"ID":"3986c16f-d992-4d26-9f12-0892ffc031d6","Type":"ContainerDied","Data":"584d50092874661a84fd5b30b64ab0254bddbd7d04628f1496e3c63fbe2f1e84"} Nov 24 17:04:39 crc kubenswrapper[4768]: I1124 17:04:39.994008 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz" event={"ID":"3986c16f-d992-4d26-9f12-0892ffc031d6","Type":"ContainerStarted","Data":"004c9f7b29775a9f6c0c08d56ec645493abbe9acf5a6c6d685dbb1e3ac7d99e5"} Nov 24 17:04:42 crc kubenswrapper[4768]: I1124 17:04:42.011098 4768 generic.go:334] "Generic (PLEG): container finished" podID="3986c16f-d992-4d26-9f12-0892ffc031d6" containerID="59c12e36c9ce8aac29086c6e50258159ebd45d43a09a7db6ca0f5d7683e4f7e3" exitCode=0 Nov 24 17:04:42 crc kubenswrapper[4768]: I1124 17:04:42.011194 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz" event={"ID":"3986c16f-d992-4d26-9f12-0892ffc031d6","Type":"ContainerDied","Data":"59c12e36c9ce8aac29086c6e50258159ebd45d43a09a7db6ca0f5d7683e4f7e3"} Nov 24 17:04:43 crc kubenswrapper[4768]: I1124 17:04:43.022044 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz" 
event={"ID":"3986c16f-d992-4d26-9f12-0892ffc031d6","Type":"ContainerDied","Data":"158f05762a63d215604e271667bb749258c99bef936acd16ad44b15b2e34c254"} Nov 24 17:04:43 crc kubenswrapper[4768]: I1124 17:04:43.021926 4768 generic.go:334] "Generic (PLEG): container finished" podID="3986c16f-d992-4d26-9f12-0892ffc031d6" containerID="158f05762a63d215604e271667bb749258c99bef936acd16ad44b15b2e34c254" exitCode=0 Nov 24 17:04:44 crc kubenswrapper[4768]: I1124 17:04:44.500899 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz" Nov 24 17:04:44 crc kubenswrapper[4768]: I1124 17:04:44.699886 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cknnp\" (UniqueName: \"kubernetes.io/projected/3986c16f-d992-4d26-9f12-0892ffc031d6-kube-api-access-cknnp\") pod \"3986c16f-d992-4d26-9f12-0892ffc031d6\" (UID: \"3986c16f-d992-4d26-9f12-0892ffc031d6\") " Nov 24 17:04:44 crc kubenswrapper[4768]: I1124 17:04:44.699972 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3986c16f-d992-4d26-9f12-0892ffc031d6-bundle\") pod \"3986c16f-d992-4d26-9f12-0892ffc031d6\" (UID: \"3986c16f-d992-4d26-9f12-0892ffc031d6\") " Nov 24 17:04:44 crc kubenswrapper[4768]: I1124 17:04:44.700293 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3986c16f-d992-4d26-9f12-0892ffc031d6-util\") pod \"3986c16f-d992-4d26-9f12-0892ffc031d6\" (UID: \"3986c16f-d992-4d26-9f12-0892ffc031d6\") " Nov 24 17:04:44 crc kubenswrapper[4768]: I1124 17:04:44.702031 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3986c16f-d992-4d26-9f12-0892ffc031d6-bundle" (OuterVolumeSpecName: "bundle") pod "3986c16f-d992-4d26-9f12-0892ffc031d6" (UID: "3986c16f-d992-4d26-9f12-0892ffc031d6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:04:44 crc kubenswrapper[4768]: I1124 17:04:44.709561 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3986c16f-d992-4d26-9f12-0892ffc031d6-kube-api-access-cknnp" (OuterVolumeSpecName: "kube-api-access-cknnp") pod "3986c16f-d992-4d26-9f12-0892ffc031d6" (UID: "3986c16f-d992-4d26-9f12-0892ffc031d6"). InnerVolumeSpecName "kube-api-access-cknnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:04:44 crc kubenswrapper[4768]: I1124 17:04:44.733024 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3986c16f-d992-4d26-9f12-0892ffc031d6-util" (OuterVolumeSpecName: "util") pod "3986c16f-d992-4d26-9f12-0892ffc031d6" (UID: "3986c16f-d992-4d26-9f12-0892ffc031d6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:04:44 crc kubenswrapper[4768]: I1124 17:04:44.807625 4768 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3986c16f-d992-4d26-9f12-0892ffc031d6-util\") on node \"crc\" DevicePath \"\"" Nov 24 17:04:44 crc kubenswrapper[4768]: I1124 17:04:44.807685 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cknnp\" (UniqueName: \"kubernetes.io/projected/3986c16f-d992-4d26-9f12-0892ffc031d6-kube-api-access-cknnp\") on node \"crc\" DevicePath \"\"" Nov 24 17:04:44 crc kubenswrapper[4768]: I1124 17:04:44.807703 4768 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3986c16f-d992-4d26-9f12-0892ffc031d6-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:04:45 crc kubenswrapper[4768]: I1124 17:04:45.045030 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz" event={"ID":"3986c16f-d992-4d26-9f12-0892ffc031d6","Type":"ContainerDied","Data":"004c9f7b29775a9f6c0c08d56ec645493abbe9acf5a6c6d685dbb1e3ac7d99e5"} Nov 24 17:04:45 crc kubenswrapper[4768]: I1124 17:04:45.045094 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz" Nov 24 17:04:45 crc kubenswrapper[4768]: I1124 17:04:45.045117 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="004c9f7b29775a9f6c0c08d56ec645493abbe9acf5a6c6d685dbb1e3ac7d99e5" Nov 24 17:04:50 crc kubenswrapper[4768]: I1124 17:04:50.945745 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-849cb45cff-pvcvk"] Nov 24 17:04:50 crc kubenswrapper[4768]: E1124 17:04:50.946615 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3986c16f-d992-4d26-9f12-0892ffc031d6" containerName="util" Nov 24 17:04:50 crc kubenswrapper[4768]: I1124 17:04:50.946631 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3986c16f-d992-4d26-9f12-0892ffc031d6" containerName="util" Nov 24 17:04:50 crc kubenswrapper[4768]: E1124 17:04:50.946657 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3986c16f-d992-4d26-9f12-0892ffc031d6" containerName="extract" Nov 24 17:04:50 crc kubenswrapper[4768]: I1124 17:04:50.946664 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3986c16f-d992-4d26-9f12-0892ffc031d6" containerName="extract" Nov 24 17:04:50 crc kubenswrapper[4768]: E1124 17:04:50.946679 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3986c16f-d992-4d26-9f12-0892ffc031d6" containerName="pull" Nov 24 17:04:50 crc kubenswrapper[4768]: I1124 17:04:50.946687 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3986c16f-d992-4d26-9f12-0892ffc031d6" containerName="pull" Nov 24 17:04:50 crc kubenswrapper[4768]: I1124 17:04:50.946812 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="3986c16f-d992-4d26-9f12-0892ffc031d6" containerName="extract" Nov 24 17:04:50 crc kubenswrapper[4768]: I1124 17:04:50.947410 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-849cb45cff-pvcvk" Nov 24 17:04:50 crc kubenswrapper[4768]: I1124 17:04:50.949426 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-h6ldr" Nov 24 17:04:50 crc kubenswrapper[4768]: I1124 17:04:50.975065 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-849cb45cff-pvcvk"] Nov 24 17:04:50 crc kubenswrapper[4768]: I1124 17:04:50.994123 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5c78\" (UniqueName: \"kubernetes.io/projected/24c6b375-70f7-4954-9f65-4e3dcf12de68-kube-api-access-m5c78\") pod \"openstack-operator-controller-operator-849cb45cff-pvcvk\" (UID: \"24c6b375-70f7-4954-9f65-4e3dcf12de68\") " pod="openstack-operators/openstack-operator-controller-operator-849cb45cff-pvcvk" Nov 24 17:04:51 crc kubenswrapper[4768]: I1124 17:04:51.094829 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5c78\" (UniqueName: \"kubernetes.io/projected/24c6b375-70f7-4954-9f65-4e3dcf12de68-kube-api-access-m5c78\") pod \"openstack-operator-controller-operator-849cb45cff-pvcvk\" (UID: \"24c6b375-70f7-4954-9f65-4e3dcf12de68\") " pod="openstack-operators/openstack-operator-controller-operator-849cb45cff-pvcvk" Nov 24 17:04:51 crc kubenswrapper[4768]: I1124 17:04:51.111894 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5c78\" (UniqueName: \"kubernetes.io/projected/24c6b375-70f7-4954-9f65-4e3dcf12de68-kube-api-access-m5c78\") pod \"openstack-operator-controller-operator-849cb45cff-pvcvk\" (UID: \"24c6b375-70f7-4954-9f65-4e3dcf12de68\") " pod="openstack-operators/openstack-operator-controller-operator-849cb45cff-pvcvk" Nov 24 17:04:51 crc kubenswrapper[4768]: I1124 17:04:51.266527 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-849cb45cff-pvcvk" Nov 24 17:04:51 crc kubenswrapper[4768]: I1124 17:04:51.759417 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-849cb45cff-pvcvk"] Nov 24 17:04:52 crc kubenswrapper[4768]: I1124 17:04:52.091154 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-849cb45cff-pvcvk" event={"ID":"24c6b375-70f7-4954-9f65-4e3dcf12de68","Type":"ContainerStarted","Data":"9def6bc6c01ec479f186d515003ba600642bded5f54bd6c5354b2773e401d38e"} Nov 24 17:04:56 crc kubenswrapper[4768]: I1124 17:04:56.136254 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-849cb45cff-pvcvk" event={"ID":"24c6b375-70f7-4954-9f65-4e3dcf12de68","Type":"ContainerStarted","Data":"cbd17e6362a85adf57b3e9bec6d89863aae0e998371d7bc4d8b4f05390c6325a"} Nov 24 17:04:56 crc kubenswrapper[4768]: I1124 17:04:56.136634 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-849cb45cff-pvcvk" Nov 24 17:04:56 crc kubenswrapper[4768]: I1124 17:04:56.170589 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-849cb45cff-pvcvk" podStartSLOduration=2.256318965 podStartE2EDuration="6.170557927s" podCreationTimestamp="2025-11-24 17:04:50 +0000 UTC" firstStartedPulling="2025-11-24 17:04:51.770215515 +0000 UTC m=+773.017184173" lastFinishedPulling="2025-11-24 17:04:55.684454477 +0000 UTC m=+776.931423135" observedRunningTime="2025-11-24 17:04:56.161079393 +0000 UTC m=+777.408048111" watchObservedRunningTime="2025-11-24 17:04:56.170557927 +0000 UTC m=+777.417526605" Nov 24 17:05:01 crc kubenswrapper[4768]: I1124 17:05:01.270712 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-849cb45cff-pvcvk" Nov 24 17:05:04 crc kubenswrapper[4768]: I1124 17:05:04.892957 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:05:04 crc kubenswrapper[4768]: I1124 17:05:04.893270 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:05:13 crc kubenswrapper[4768]: I1124 17:05:13.908810 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-twkb6"] Nov 24 17:05:13 crc kubenswrapper[4768]: I1124 17:05:13.910518 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twkb6" Nov 24 17:05:13 crc kubenswrapper[4768]: I1124 17:05:13.919611 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-twkb6"] Nov 24 17:05:14 crc kubenswrapper[4768]: I1124 17:05:14.036324 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6-catalog-content\") pod \"redhat-marketplace-twkb6\" (UID: \"9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6\") " pod="openshift-marketplace/redhat-marketplace-twkb6" Nov 24 17:05:14 crc kubenswrapper[4768]: I1124 17:05:14.036844 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6-utilities\") pod \"redhat-marketplace-twkb6\" (UID: \"9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6\") " pod="openshift-marketplace/redhat-marketplace-twkb6" Nov 24 17:05:14 crc kubenswrapper[4768]: I1124 17:05:14.036871 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrwm4\" (UniqueName: \"kubernetes.io/projected/9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6-kube-api-access-hrwm4\") pod \"redhat-marketplace-twkb6\" (UID: \"9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6\") " pod="openshift-marketplace/redhat-marketplace-twkb6" Nov 24 17:05:14 crc kubenswrapper[4768]: I1124 17:05:14.138207 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6-catalog-content\") pod \"redhat-marketplace-twkb6\" (UID: \"9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6\") " pod="openshift-marketplace/redhat-marketplace-twkb6" Nov 24 17:05:14 crc kubenswrapper[4768]: I1124 17:05:14.138258 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6-utilities\") pod \"redhat-marketplace-twkb6\" (UID: \"9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6\") " pod="openshift-marketplace/redhat-marketplace-twkb6" Nov 24 17:05:14 crc kubenswrapper[4768]: I1124 17:05:14.138278 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrwm4\" (UniqueName: \"kubernetes.io/projected/9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6-kube-api-access-hrwm4\") pod \"redhat-marketplace-twkb6\" (UID: \"9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6\") " pod="openshift-marketplace/redhat-marketplace-twkb6" Nov 24 17:05:14 crc kubenswrapper[4768]: I1124 17:05:14.138835 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6-catalog-content\") pod \"redhat-marketplace-twkb6\" (UID: \"9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6\") " pod="openshift-marketplace/redhat-marketplace-twkb6" Nov 24 17:05:14 crc kubenswrapper[4768]: I1124 17:05:14.138925 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6-utilities\") pod \"redhat-marketplace-twkb6\" (UID: \"9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6\") " pod="openshift-marketplace/redhat-marketplace-twkb6" Nov 24 17:05:14 crc kubenswrapper[4768]: I1124 17:05:14.163470 4768 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hrwm4\" (UniqueName: \"kubernetes.io/projected/9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6-kube-api-access-hrwm4\") pod \"redhat-marketplace-twkb6\" (UID: \"9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6\") " pod="openshift-marketplace/redhat-marketplace-twkb6" Nov 24 17:05:14 crc kubenswrapper[4768]: I1124 17:05:14.260706 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twkb6" Nov 24 17:05:14 crc kubenswrapper[4768]: I1124 17:05:14.734161 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-twkb6"] Nov 24 17:05:14 crc kubenswrapper[4768]: W1124 17:05:14.742114 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d811f60_5a0c_4baa_bfa7_4a3e2a5cc2c6.slice/crio-76e454732354ddbeb81a19b5716d56513cc69716303fcd71894e2f5aafc5dfc1 WatchSource:0}: Error finding container 76e454732354ddbeb81a19b5716d56513cc69716303fcd71894e2f5aafc5dfc1: Status 404 returned error can't find the container with id 76e454732354ddbeb81a19b5716d56513cc69716303fcd71894e2f5aafc5dfc1 Nov 24 17:05:15 crc kubenswrapper[4768]: I1124 17:05:15.251797 4768 generic.go:334] "Generic (PLEG): container finished" podID="9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6" containerID="52a38bf2527a01c3da924a907304483532c3d0cf9945844731ac744ee2bc9080" exitCode=0 Nov 24 17:05:15 crc kubenswrapper[4768]: I1124 17:05:15.251896 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twkb6" event={"ID":"9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6","Type":"ContainerDied","Data":"52a38bf2527a01c3da924a907304483532c3d0cf9945844731ac744ee2bc9080"} Nov 24 17:05:15 crc kubenswrapper[4768]: I1124 17:05:15.252169 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twkb6" event={"ID":"9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6","Type":"ContainerStarted","Data":"76e454732354ddbeb81a19b5716d56513cc69716303fcd71894e2f5aafc5dfc1"} Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.054409 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4xg49"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.056308 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4xg49" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.064107 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-r4r5f" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.070783 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4xg49"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.077051 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-gnzjb"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.079929 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-gnzjb" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.088254 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-h62r7" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.119492 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-gnzjb"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.134295 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-jdszs"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.136276 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jdszs" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.153806 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-tvbsf" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.160809 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-jdszs"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.167595 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p9kt\" (UniqueName: \"kubernetes.io/projected/61f1ba78-cd9d-4202-9463-f7a4c5cc9092-kube-api-access-4p9kt\") pod \"barbican-operator-controller-manager-86dc4d89c8-4xg49\" (UID: \"61f1ba78-cd9d-4202-9463-f7a4c5cc9092\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4xg49" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.180839 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-fxzrc"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.181900 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-fxzrc" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.183479 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-54ldc" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.203472 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-6smrr"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.209049 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-6smrr"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.209156 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-6smrr" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.214752 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-gd887" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.220278 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-jfk9g"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.221604 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jfk9g" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.223966 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-knt44" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.238886 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-fxzrc"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.253066 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-jfk9g"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.269049 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nmgb\" (UniqueName: \"kubernetes.io/projected/1a60eac6-e17c-4621-9367-3d1b60aab811-kube-api-access-8nmgb\") pod \"cinder-operator-controller-manager-79856dc55c-gnzjb\" (UID: \"1a60eac6-e17c-4621-9367-3d1b60aab811\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-gnzjb" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.269124 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p9kt\" (UniqueName: \"kubernetes.io/projected/61f1ba78-cd9d-4202-9463-f7a4c5cc9092-kube-api-access-4p9kt\") pod \"barbican-operator-controller-manager-86dc4d89c8-4xg49\" (UID: \"61f1ba78-cd9d-4202-9463-f7a4c5cc9092\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4xg49" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.269149 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c84s2\" (UniqueName: \"kubernetes.io/projected/f7e72195-5597-498f-906e-573b0c5c8295-kube-api-access-c84s2\") pod \"horizon-operator-controller-manager-68c9694994-jfk9g\" (UID: \"f7e72195-5597-498f-906e-573b0c5c8295\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jfk9g" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.269170 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2qgk\" (UniqueName: \"kubernetes.io/projected/f5b8ba2f-084a-4285-938b-5ffe669a9250-kube-api-access-j2qgk\") pod \"designate-operator-controller-manager-7d695c9b56-jdszs\" (UID: \"f5b8ba2f-084a-4285-938b-5ffe669a9250\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jdszs" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.269200 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z27p7\" (UniqueName: \"kubernetes.io/projected/d35343f5-188c-4787-9002-125c9e597e80-kube-api-access-z27p7\") pod \"heat-operator-controller-manager-774b86978c-6smrr\" (UID: \"d35343f5-188c-4787-9002-125c9e597e80\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-6smrr" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.269228 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qjzq\" (UniqueName: \"kubernetes.io/projected/db716c0e-bc96-4eaa-af75-184cd71e8124-kube-api-access-6qjzq\") pod \"glance-operator-controller-manager-68b95954c9-fxzrc\" (UID: \"db716c0e-bc96-4eaa-af75-184cd71e8124\") " 
pod="openstack-operators/glance-operator-controller-manager-68b95954c9-fxzrc" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.270175 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-d9crw"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.271249 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-d9crw" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.288749 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.288956 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-2bckw" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.290294 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-58fc45656d-mlqr9"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.291335 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-58fc45656d-mlqr9" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.291500 4768 generic.go:334] "Generic (PLEG): container finished" podID="9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6" containerID="27bee47c48112b79835448a0cecdd744ec60239818a2e3acc1a6118ec74146c4" exitCode=0 Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.291533 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twkb6" event={"ID":"9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6","Type":"ContainerDied","Data":"27bee47c48112b79835448a0cecdd744ec60239818a2e3acc1a6118ec74146c4"} Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.302834 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-vqt9l" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.331050 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p9kt\" (UniqueName: \"kubernetes.io/projected/61f1ba78-cd9d-4202-9463-f7a4c5cc9092-kube-api-access-4p9kt\") pod \"barbican-operator-controller-manager-86dc4d89c8-4xg49\" (UID: \"61f1ba78-cd9d-4202-9463-f7a4c5cc9092\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4xg49" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.346730 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-zsr4q"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.347934 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zsr4q" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.357009 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-fbmf6" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.362842 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-58fc45656d-mlqr9"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.370158 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2qgk\" (UniqueName: \"kubernetes.io/projected/f5b8ba2f-084a-4285-938b-5ffe669a9250-kube-api-access-j2qgk\") pod \"designate-operator-controller-manager-7d695c9b56-jdszs\" (UID: \"f5b8ba2f-084a-4285-938b-5ffe669a9250\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jdszs" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.370213 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z27p7\" (UniqueName: \"kubernetes.io/projected/d35343f5-188c-4787-9002-125c9e597e80-kube-api-access-z27p7\") pod \"heat-operator-controller-manager-774b86978c-6smrr\" (UID: \"d35343f5-188c-4787-9002-125c9e597e80\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-6smrr" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.370247 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qjzq\" (UniqueName: \"kubernetes.io/projected/db716c0e-bc96-4eaa-af75-184cd71e8124-kube-api-access-6qjzq\") pod \"glance-operator-controller-manager-68b95954c9-fxzrc\" (UID: \"db716c0e-bc96-4eaa-af75-184cd71e8124\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-fxzrc" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.370276 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8jnk\" (UniqueName: \"kubernetes.io/projected/e2835f06-b5ce-4170-a4c3-4a08e9cc2815-kube-api-access-g8jnk\") pod \"infra-operator-controller-manager-d5cc86f4b-d9crw\" (UID: \"e2835f06-b5ce-4170-a4c3-4a08e9cc2815\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-d9crw" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.370309 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nmgb\" (UniqueName: \"kubernetes.io/projected/1a60eac6-e17c-4621-9367-3d1b60aab811-kube-api-access-8nmgb\") pod \"cinder-operator-controller-manager-79856dc55c-gnzjb\" (UID: \"1a60eac6-e17c-4621-9367-3d1b60aab811\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-gnzjb" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.370330 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwk29\" (UniqueName: \"kubernetes.io/projected/8eff7b8e-21b1-4d9f-ac7b-bc44593394c1-kube-api-access-dwk29\") pod \"keystone-operator-controller-manager-748dc6576f-zsr4q\" (UID: \"8eff7b8e-21b1-4d9f-ac7b-bc44593394c1\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zsr4q" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.370370 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/e2835f06-b5ce-4170-a4c3-4a08e9cc2815-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-d9crw\" (UID: \"e2835f06-b5ce-4170-a4c3-4a08e9cc2815\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-d9crw" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.370394 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbq4r\" (UniqueName: \"kubernetes.io/projected/cdfcbb97-9f2e-40ab-863a-93e592ee728a-kube-api-access-mbq4r\") pod \"ironic-operator-controller-manager-58fc45656d-mlqr9\" (UID: \"cdfcbb97-9f2e-40ab-863a-93e592ee728a\") " pod="openstack-operators/ironic-operator-controller-manager-58fc45656d-mlqr9" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.370416 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c84s2\" (UniqueName: \"kubernetes.io/projected/f7e72195-5597-498f-906e-573b0c5c8295-kube-api-access-c84s2\") pod \"horizon-operator-controller-manager-68c9694994-jfk9g\" (UID: \"f7e72195-5597-498f-906e-573b0c5c8295\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jfk9g" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.381417 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-d9crw"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.391851 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-dsjtl"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.393148 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-dsjtl" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.397934 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c84s2\" (UniqueName: \"kubernetes.io/projected/f7e72195-5597-498f-906e-573b0c5c8295-kube-api-access-c84s2\") pod \"horizon-operator-controller-manager-68c9694994-jfk9g\" (UID: \"f7e72195-5597-498f-906e-573b0c5c8295\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jfk9g" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.398748 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-fjjfq" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.401316 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4xg49" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.405146 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qjzq\" (UniqueName: \"kubernetes.io/projected/db716c0e-bc96-4eaa-af75-184cd71e8124-kube-api-access-6qjzq\") pod \"glance-operator-controller-manager-68b95954c9-fxzrc\" (UID: \"db716c0e-bc96-4eaa-af75-184cd71e8124\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-fxzrc" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.405781 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z27p7\" (UniqueName: \"kubernetes.io/projected/d35343f5-188c-4787-9002-125c9e597e80-kube-api-access-z27p7\") pod \"heat-operator-controller-manager-774b86978c-6smrr\" (UID: \"d35343f5-188c-4787-9002-125c9e597e80\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-6smrr" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.414132 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nmgb\" (UniqueName: \"kubernetes.io/projected/1a60eac6-e17c-4621-9367-3d1b60aab811-kube-api-access-8nmgb\") pod \"cinder-operator-controller-manager-79856dc55c-gnzjb\" (UID: \"1a60eac6-e17c-4621-9367-3d1b60aab811\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-gnzjb" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.424052 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-gnzjb" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.439575 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-zsr4q"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.442962 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2qgk\" (UniqueName: \"kubernetes.io/projected/f5b8ba2f-084a-4285-938b-5ffe669a9250-kube-api-access-j2qgk\") pod \"designate-operator-controller-manager-7d695c9b56-jdszs\" (UID: \"f5b8ba2f-084a-4285-938b-5ffe669a9250\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jdszs" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.447577 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-dsjtl"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.458463 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xv4wf"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.459609 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xv4wf" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.464807 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-xfqvf" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.466609 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xv4wf"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.471635 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwk29\" (UniqueName: \"kubernetes.io/projected/8eff7b8e-21b1-4d9f-ac7b-bc44593394c1-kube-api-access-dwk29\") pod \"keystone-operator-controller-manager-748dc6576f-zsr4q\" (UID: \"8eff7b8e-21b1-4d9f-ac7b-bc44593394c1\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zsr4q" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.472110 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e2835f06-b5ce-4170-a4c3-4a08e9cc2815-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-d9crw\" (UID: \"e2835f06-b5ce-4170-a4c3-4a08e9cc2815\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-d9crw" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.472157 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbq4r\" (UniqueName: \"kubernetes.io/projected/cdfcbb97-9f2e-40ab-863a-93e592ee728a-kube-api-access-mbq4r\") pod \"ironic-operator-controller-manager-58fc45656d-mlqr9\" (UID: \"cdfcbb97-9f2e-40ab-863a-93e592ee728a\") " pod="openstack-operators/ironic-operator-controller-manager-58fc45656d-mlqr9" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.472243 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8jnk\" (UniqueName: \"kubernetes.io/projected/e2835f06-b5ce-4170-a4c3-4a08e9cc2815-kube-api-access-g8jnk\") pod \"infra-operator-controller-manager-d5cc86f4b-d9crw\" (UID: \"e2835f06-b5ce-4170-a4c3-4a08e9cc2815\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-d9crw" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.477764 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e2835f06-b5ce-4170-a4c3-4a08e9cc2815-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-d9crw\" (UID: \"e2835f06-b5ce-4170-a4c3-4a08e9cc2815\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-d9crw" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.481739 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jdszs" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.493888 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8jnk\" (UniqueName: \"kubernetes.io/projected/e2835f06-b5ce-4170-a4c3-4a08e9cc2815-kube-api-access-g8jnk\") pod \"infra-operator-controller-manager-d5cc86f4b-d9crw\" (UID: \"e2835f06-b5ce-4170-a4c3-4a08e9cc2815\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-d9crw" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.495221 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwk29\" (UniqueName: \"kubernetes.io/projected/8eff7b8e-21b1-4d9f-ac7b-bc44593394c1-kube-api-access-dwk29\") pod \"keystone-operator-controller-manager-748dc6576f-zsr4q\" (UID: \"8eff7b8e-21b1-4d9f-ac7b-bc44593394c1\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zsr4q" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.497254 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbq4r\" (UniqueName: \"kubernetes.io/projected/cdfcbb97-9f2e-40ab-863a-93e592ee728a-kube-api-access-mbq4r\") pod \"ironic-operator-controller-manager-58fc45656d-mlqr9\" (UID: \"cdfcbb97-9f2e-40ab-863a-93e592ee728a\") " pod="openstack-operators/ironic-operator-controller-manager-58fc45656d-mlqr9" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.502493 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-9x7r8"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.504672 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-9x7r8" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.510533 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-kkfwd" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.511735 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-9sgvb"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.512793 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-9sgvb" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.514324 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-6sl5b" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.516042 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-fxzrc" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.534378 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-6nh25"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.538115 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6nh25" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.544998 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-xz5kd" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.545560 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-6smrr" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.569438 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-9sgvb"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.573297 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t97fn\" (UniqueName: \"kubernetes.io/projected/a718e502-d0e6-45ee-8a65-88de1381da04-kube-api-access-t97fn\") pod \"manila-operator-controller-manager-58bb8d67cc-dsjtl\" (UID: \"a718e502-d0e6-45ee-8a65-88de1381da04\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-dsjtl" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.573819 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w7h6\" (UniqueName: \"kubernetes.io/projected/a8b9e845-7f76-4609-aef9-89d1a16c971b-kube-api-access-2w7h6\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-xv4wf\" (UID: \"a8b9e845-7f76-4609-aef9-89d1a16c971b\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xv4wf" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.587650 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-9x7r8"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.593449 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jfk9g" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.601422 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-6nh25"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.633412 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.640052 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.643106 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-wj4qc" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.643387 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.650280 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-d9crw" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.666218 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-v2hfk"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.672038 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-58fc45656d-mlqr9" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.675454 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t97fn\" (UniqueName: \"kubernetes.io/projected/a718e502-d0e6-45ee-8a65-88de1381da04-kube-api-access-t97fn\") pod \"manila-operator-controller-manager-58bb8d67cc-dsjtl\" (UID: \"a718e502-d0e6-45ee-8a65-88de1381da04\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-dsjtl" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.675521 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkfkn\" (UniqueName: \"kubernetes.io/projected/2f3138aa-0515-46f5-b897-191356f55fa4-kube-api-access-hkfkn\") pod \"neutron-operator-controller-manager-7c57c8bbc4-9x7r8\" (UID: \"2f3138aa-0515-46f5-b897-191356f55fa4\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-9x7r8" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.675584 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnwgj\" (UniqueName: \"kubernetes.io/projected/8badbdc1-a611-4ada-821a-daade496a649-kube-api-access-mnwgj\") pod \"octavia-operator-controller-manager-fd75fd47d-6nh25\" (UID: \"8badbdc1-a611-4ada-821a-daade496a649\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6nh25" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.675652 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grzp6\" (UniqueName: \"kubernetes.io/projected/18e53c5a-b1dd-4f0e-9bf6-8d97954d9d5d-kube-api-access-grzp6\") pod \"nova-operator-controller-manager-79556f57fc-9sgvb\" (UID: \"18e53c5a-b1dd-4f0e-9bf6-8d97954d9d5d\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-9sgvb" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.675706 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w7h6\" (UniqueName: \"kubernetes.io/projected/a8b9e845-7f76-4609-aef9-89d1a16c971b-kube-api-access-2w7h6\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-xv4wf\" (UID: \"a8b9e845-7f76-4609-aef9-89d1a16c971b\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xv4wf" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.675846 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-v2hfk" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.675483 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-qnqvs"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.678857 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-qnqvs" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.679812 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zsr4q" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.697008 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-8t5bh" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.697401 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4bmnf" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.708406 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-v2hfk"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.752058 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t97fn\" (UniqueName: \"kubernetes.io/projected/a718e502-d0e6-45ee-8a65-88de1381da04-kube-api-access-t97fn\") pod \"manila-operator-controller-manager-58bb8d67cc-dsjtl\" (UID: \"a718e502-d0e6-45ee-8a65-88de1381da04\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-dsjtl" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.752274 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.781033 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-qnqvs"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.782822 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-str8b\" (UniqueName: \"kubernetes.io/projected/f7c09f33-05d7-4251-930c-43d381f7f662-kube-api-access-str8b\") pod \"ovn-operator-controller-manager-66cf5c67ff-v2hfk\" (UID: \"f7c09f33-05d7-4251-930c-43d381f7f662\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-v2hfk" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.782863 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2020ac4a-5a4a-4c38-b667-5432dbf3d891-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g\" (UID: \"2020ac4a-5a4a-4c38-b667-5432dbf3d891\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.782909 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkfkn\" (UniqueName: \"kubernetes.io/projected/2f3138aa-0515-46f5-b897-191356f55fa4-kube-api-access-hkfkn\") pod \"neutron-operator-controller-manager-7c57c8bbc4-9x7r8\" (UID: \"2f3138aa-0515-46f5-b897-191356f55fa4\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-9x7r8" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.782976 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqcpf\" (UniqueName: \"kubernetes.io/projected/2020ac4a-5a4a-4c38-b667-5432dbf3d891-kube-api-access-hqcpf\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g\" (UID: 
\"2020ac4a-5a4a-4c38-b667-5432dbf3d891\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.783011 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnwgj\" (UniqueName: \"kubernetes.io/projected/8badbdc1-a611-4ada-821a-daade496a649-kube-api-access-mnwgj\") pod \"octavia-operator-controller-manager-fd75fd47d-6nh25\" (UID: \"8badbdc1-a611-4ada-821a-daade496a649\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6nh25" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.783086 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4625\" (UniqueName: \"kubernetes.io/projected/e2f173d4-03f8-44b0-b05f-3dfd845569e8-kube-api-access-r4625\") pod \"placement-operator-controller-manager-5db546f9d9-qnqvs\" (UID: \"e2f173d4-03f8-44b0-b05f-3dfd845569e8\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-qnqvs" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.783120 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grzp6\" (UniqueName: \"kubernetes.io/projected/18e53c5a-b1dd-4f0e-9bf6-8d97954d9d5d-kube-api-access-grzp6\") pod \"nova-operator-controller-manager-79556f57fc-9sgvb\" (UID: \"18e53c5a-b1dd-4f0e-9bf6-8d97954d9d5d\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-9sgvb" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.786496 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w7h6\" (UniqueName: \"kubernetes.io/projected/a8b9e845-7f76-4609-aef9-89d1a16c971b-kube-api-access-2w7h6\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-xv4wf\" (UID: \"a8b9e845-7f76-4609-aef9-89d1a16c971b\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xv4wf" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.787984 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-x24f2"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.796892 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-x24f2" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.798005 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-dsjtl" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.829700 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xv4wf" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.831293 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-zv7tg" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.848757 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnwgj\" (UniqueName: \"kubernetes.io/projected/8badbdc1-a611-4ada-821a-daade496a649-kube-api-access-mnwgj\") pod \"octavia-operator-controller-manager-fd75fd47d-6nh25\" (UID: \"8badbdc1-a611-4ada-821a-daade496a649\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6nh25" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.852432 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-x24f2"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.853092 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grzp6\" (UniqueName: \"kubernetes.io/projected/18e53c5a-b1dd-4f0e-9bf6-8d97954d9d5d-kube-api-access-grzp6\") pod \"nova-operator-controller-manager-79556f57fc-9sgvb\" (UID: \"18e53c5a-b1dd-4f0e-9bf6-8d97954d9d5d\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-9sgvb" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.855881 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkfkn\" (UniqueName: \"kubernetes.io/projected/2f3138aa-0515-46f5-b897-191356f55fa4-kube-api-access-hkfkn\") pod \"neutron-operator-controller-manager-7c57c8bbc4-9x7r8\" (UID: \"2f3138aa-0515-46f5-b897-191356f55fa4\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-9x7r8" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.881678 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-9sgvb" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.884104 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-str8b\" (UniqueName: \"kubernetes.io/projected/f7c09f33-05d7-4251-930c-43d381f7f662-kube-api-access-str8b\") pod \"ovn-operator-controller-manager-66cf5c67ff-v2hfk\" (UID: \"f7c09f33-05d7-4251-930c-43d381f7f662\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-v2hfk" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.884146 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2020ac4a-5a4a-4c38-b667-5432dbf3d891-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g\" (UID: \"2020ac4a-5a4a-4c38-b667-5432dbf3d891\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.884189 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqcpf\" (UniqueName: \"kubernetes.io/projected/2020ac4a-5a4a-4c38-b667-5432dbf3d891-kube-api-access-hqcpf\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g\" (UID: \"2020ac4a-5a4a-4c38-b667-5432dbf3d891\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.884231 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4625\" (UniqueName: \"kubernetes.io/projected/e2f173d4-03f8-44b0-b05f-3dfd845569e8-kube-api-access-r4625\") pod \"placement-operator-controller-manager-5db546f9d9-qnqvs\" (UID: \"e2f173d4-03f8-44b0-b05f-3dfd845569e8\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-qnqvs" Nov 24 17:05:16 crc kubenswrapper[4768]: E1124 17:05:16.884702 4768 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 17:05:16 crc kubenswrapper[4768]: E1124 17:05:16.884743 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2020ac4a-5a4a-4c38-b667-5432dbf3d891-cert podName:2020ac4a-5a4a-4c38-b667-5432dbf3d891 nodeName:}" failed. No retries permitted until 2025-11-24 17:05:17.384730228 +0000 UTC m=+798.631698886 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2020ac4a-5a4a-4c38-b667-5432dbf3d891-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g" (UID: "2020ac4a-5a4a-4c38-b667-5432dbf3d891") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.885252 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-hvvsp"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.895120 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6nh25" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.921940 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-hvvsp"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.922106 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hvvsp" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.942692 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-l7cns" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.947401 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-gtc95"] Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.948753 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4625\" (UniqueName: \"kubernetes.io/projected/e2f173d4-03f8-44b0-b05f-3dfd845569e8-kube-api-access-r4625\") pod \"placement-operator-controller-manager-5db546f9d9-qnqvs\" (UID: \"e2f173d4-03f8-44b0-b05f-3dfd845569e8\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-qnqvs" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.948852 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-gtc95" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.950827 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-scvgw" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.954447 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-str8b\" (UniqueName: \"kubernetes.io/projected/f7c09f33-05d7-4251-930c-43d381f7f662-kube-api-access-str8b\") pod \"ovn-operator-controller-manager-66cf5c67ff-v2hfk\" (UID: \"f7c09f33-05d7-4251-930c-43d381f7f662\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-v2hfk" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.958449 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqcpf\" (UniqueName: \"kubernetes.io/projected/2020ac4a-5a4a-4c38-b667-5432dbf3d891-kube-api-access-hqcpf\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g\" (UID: \"2020ac4a-5a4a-4c38-b667-5432dbf3d891\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.986284 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx6fn\" (UniqueName: \"kubernetes.io/projected/27ed9b45-b076-4104-a661-bc231021ae5b-kube-api-access-kx6fn\") pod \"swift-operator-controller-manager-6fdc4fcf86-x24f2\" (UID: \"27ed9b45-b076-4104-a661-bc231021ae5b\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-x24f2" Nov 24 17:05:16 crc kubenswrapper[4768]: I1124 17:05:16.992668 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-gtc95"] Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.052409 4768 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/watcher-operator-controller-manager-864885998-b2r7j"] Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.053718 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-b2r7j" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.056782 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-f8684" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.059683 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-b2r7j"] Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.068569 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-v2hfk" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.100238 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r99sz\" (UniqueName: \"kubernetes.io/projected/d40b5804-6340-4be6-8da4-dca19827c8ee-kube-api-access-r99sz\") pod \"test-operator-controller-manager-5cb74df96-gtc95\" (UID: \"d40b5804-6340-4be6-8da4-dca19827c8ee\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-gtc95" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.100393 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb27r\" (UniqueName: \"kubernetes.io/projected/5b5647ed-7d14-4366-af99-d6d48ec2f033-kube-api-access-gb27r\") pod \"telemetry-operator-controller-manager-567f98c9d-hvvsp\" (UID: \"5b5647ed-7d14-4366-af99-d6d48ec2f033\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hvvsp" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.100512 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx6fn\" (UniqueName: \"kubernetes.io/projected/27ed9b45-b076-4104-a661-bc231021ae5b-kube-api-access-kx6fn\") pod \"swift-operator-controller-manager-6fdc4fcf86-x24f2\" (UID: \"27ed9b45-b076-4104-a661-bc231021ae5b\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-x24f2" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.133441 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx6fn\" (UniqueName: \"kubernetes.io/projected/27ed9b45-b076-4104-a661-bc231021ae5b-kube-api-access-kx6fn\") pod \"swift-operator-controller-manager-6fdc4fcf86-x24f2\" (UID: \"27ed9b45-b076-4104-a661-bc231021ae5b\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-x24f2" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.152389 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-qnqvs" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.152844 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-9x7r8" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.168927 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6"] Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.169921 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.181895 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.182115 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-h4q2x" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.182714 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.190493 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6"] Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.204666 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb27r\" (UniqueName: \"kubernetes.io/projected/5b5647ed-7d14-4366-af99-d6d48ec2f033-kube-api-access-gb27r\") pod \"telemetry-operator-controller-manager-567f98c9d-hvvsp\" (UID: \"5b5647ed-7d14-4366-af99-d6d48ec2f033\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hvvsp" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.204759 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r99sz\" (UniqueName: \"kubernetes.io/projected/d40b5804-6340-4be6-8da4-dca19827c8ee-kube-api-access-r99sz\") pod \"test-operator-controller-manager-5cb74df96-gtc95\" (UID: \"d40b5804-6340-4be6-8da4-dca19827c8ee\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-gtc95" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.204784 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn2gm\" (UniqueName: \"kubernetes.io/projected/f5471d19-b623-4aa2-9a14-56d05fe236f8-kube-api-access-hn2gm\") pod \"watcher-operator-controller-manager-864885998-b2r7j\" (UID: \"f5471d19-b623-4aa2-9a14-56d05fe236f8\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-b2r7j" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.209866 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4xg49"] Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.224626 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r99sz\" (UniqueName: \"kubernetes.io/projected/d40b5804-6340-4be6-8da4-dca19827c8ee-kube-api-access-r99sz\") pod \"test-operator-controller-manager-5cb74df96-gtc95\" (UID: \"d40b5804-6340-4be6-8da4-dca19827c8ee\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-gtc95" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.230829 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb27r\" (UniqueName: \"kubernetes.io/projected/5b5647ed-7d14-4366-af99-d6d48ec2f033-kube-api-access-gb27r\") pod \"telemetry-operator-controller-manager-567f98c9d-hvvsp\" (UID: \"5b5647ed-7d14-4366-af99-d6d48ec2f033\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hvvsp" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.240970 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-x24f2" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.301414 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ttgkz"] Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.302280 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ttgkz" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.308752 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-tzffw" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.309124 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-metrics-certs\") pod \"openstack-operator-controller-manager-56fcd5b457-nhnr6\" (UID: \"920f3653-2dc6-4999-81c4-05248ca44d07\") " pod="openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.309160 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn2gm\" (UniqueName: \"kubernetes.io/projected/f5471d19-b623-4aa2-9a14-56d05fe236f8-kube-api-access-hn2gm\") pod \"watcher-operator-controller-manager-864885998-b2r7j\" (UID: \"f5471d19-b623-4aa2-9a14-56d05fe236f8\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-b2r7j" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.309210 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-webhook-certs\") pod \"openstack-operator-controller-manager-56fcd5b457-nhnr6\" (UID: \"920f3653-2dc6-4999-81c4-05248ca44d07\") " pod="openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.309229 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd2dv\" (UniqueName: \"kubernetes.io/projected/920f3653-2dc6-4999-81c4-05248ca44d07-kube-api-access-xd2dv\") pod \"openstack-operator-controller-manager-56fcd5b457-nhnr6\" (UID: \"920f3653-2dc6-4999-81c4-05248ca44d07\") " pod="openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.310910 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hvvsp" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.316099 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4xg49" event={"ID":"61f1ba78-cd9d-4202-9463-f7a4c5cc9092","Type":"ContainerStarted","Data":"f39126e93788c2fe8e48d2d804bd11b66768876aa860fb87ee219d243f18c2ed"} Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.318747 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ttgkz"] Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.326950 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-gnzjb"] Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.327935 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn2gm\" (UniqueName: \"kubernetes.io/projected/f5471d19-b623-4aa2-9a14-56d05fe236f8-kube-api-access-hn2gm\") pod \"watcher-operator-controller-manager-864885998-b2r7j\" (UID: \"f5471d19-b623-4aa2-9a14-56d05fe236f8\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-b2r7j" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.342801 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-jdszs"] Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.399166 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-gtc95" Nov 24 17:05:17 crc kubenswrapper[4768]: E1124 17:05:17.410769 4768 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 17:05:17 crc kubenswrapper[4768]: E1124 17:05:17.410826 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-webhook-certs podName:920f3653-2dc6-4999-81c4-05248ca44d07 nodeName:}" failed. No retries permitted until 2025-11-24 17:05:17.910810272 +0000 UTC m=+799.157778930 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-webhook-certs") pod "openstack-operator-controller-manager-56fcd5b457-nhnr6" (UID: "920f3653-2dc6-4999-81c4-05248ca44d07") : secret "webhook-server-cert" not found Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.410568 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-webhook-certs\") pod \"openstack-operator-controller-manager-56fcd5b457-nhnr6\" (UID: \"920f3653-2dc6-4999-81c4-05248ca44d07\") " pod="openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.412161 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd2dv\" (UniqueName: \"kubernetes.io/projected/920f3653-2dc6-4999-81c4-05248ca44d07-kube-api-access-xd2dv\") pod \"openstack-operator-controller-manager-56fcd5b457-nhnr6\" (UID: \"920f3653-2dc6-4999-81c4-05248ca44d07\") " pod="openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.412211 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2020ac4a-5a4a-4c38-b667-5432dbf3d891-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g\" (UID: \"2020ac4a-5a4a-4c38-b667-5432dbf3d891\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.412233 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-794xt\" (UniqueName: \"kubernetes.io/projected/9657d373-da37-4ca2-b8fe-7827bc37706f-kube-api-access-794xt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ttgkz\" (UID: \"9657d373-da37-4ca2-b8fe-7827bc37706f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ttgkz" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.412271 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-metrics-certs\") pod \"openstack-operator-controller-manager-56fcd5b457-nhnr6\" (UID: \"920f3653-2dc6-4999-81c4-05248ca44d07\") " pod="openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6" Nov 24 17:05:17 crc kubenswrapper[4768]: E1124 17:05:17.413426 4768 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 17:05:17 crc kubenswrapper[4768]: E1124 17:05:17.413514 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2020ac4a-5a4a-4c38-b667-5432dbf3d891-cert podName:2020ac4a-5a4a-4c38-b667-5432dbf3d891 nodeName:}" failed. No retries permitted until 2025-11-24 17:05:18.413502587 +0000 UTC m=+799.660471245 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2020ac4a-5a4a-4c38-b667-5432dbf3d891-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g" (UID: "2020ac4a-5a4a-4c38-b667-5432dbf3d891") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 17:05:17 crc kubenswrapper[4768]: E1124 17:05:17.415206 4768 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 24 17:05:17 crc kubenswrapper[4768]: E1124 17:05:17.415256 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-metrics-certs podName:920f3653-2dc6-4999-81c4-05248ca44d07 nodeName:}" failed. No retries permitted until 2025-11-24 17:05:17.915247056 +0000 UTC m=+799.162215714 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-metrics-certs") pod "openstack-operator-controller-manager-56fcd5b457-nhnr6" (UID: "920f3653-2dc6-4999-81c4-05248ca44d07") : secret "metrics-server-cert" not found Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.444144 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-b2r7j" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.465415 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd2dv\" (UniqueName: \"kubernetes.io/projected/920f3653-2dc6-4999-81c4-05248ca44d07-kube-api-access-xd2dv\") pod \"openstack-operator-controller-manager-56fcd5b457-nhnr6\" (UID: \"920f3653-2dc6-4999-81c4-05248ca44d07\") " pod="openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.478435 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-fxzrc"] Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.513804 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-794xt\" (UniqueName: \"kubernetes.io/projected/9657d373-da37-4ca2-b8fe-7827bc37706f-kube-api-access-794xt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ttgkz\" (UID: \"9657d373-da37-4ca2-b8fe-7827bc37706f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ttgkz" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.540252 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-794xt\" (UniqueName: \"kubernetes.io/projected/9657d373-da37-4ca2-b8fe-7827bc37706f-kube-api-access-794xt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-ttgkz\" (UID: \"9657d373-da37-4ca2-b8fe-7827bc37706f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ttgkz" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.582563 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-6smrr"] Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.639525 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ttgkz" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.655480 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-6nh25"] Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.671830 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-58fc45656d-mlqr9"] Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.685008 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-d9crw"] Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.888656 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-9sgvb"] Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.918916 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-webhook-certs\") pod \"openstack-operator-controller-manager-56fcd5b457-nhnr6\" (UID: \"920f3653-2dc6-4999-81c4-05248ca44d07\") " pod="openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6" Nov 24 17:05:17 crc kubenswrapper[4768]: I1124 17:05:17.919030 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-metrics-certs\") pod \"openstack-operator-controller-manager-56fcd5b457-nhnr6\" (UID: \"920f3653-2dc6-4999-81c4-05248ca44d07\") " pod="openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6" Nov 24 17:05:17 crc kubenswrapper[4768]: E1124 17:05:17.919148 4768 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 24 17:05:17 crc kubenswrapper[4768]: E1124 17:05:17.919205 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-metrics-certs podName:920f3653-2dc6-4999-81c4-05248ca44d07 nodeName:}" failed. No retries permitted until 2025-11-24 17:05:18.919186323 +0000 UTC m=+800.166154981 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-metrics-certs") pod "openstack-operator-controller-manager-56fcd5b457-nhnr6" (UID: "920f3653-2dc6-4999-81c4-05248ca44d07") : secret "metrics-server-cert" not found Nov 24 17:05:17 crc kubenswrapper[4768]: E1124 17:05:17.919256 4768 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 17:05:17 crc kubenswrapper[4768]: E1124 17:05:17.919283 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-webhook-certs podName:920f3653-2dc6-4999-81c4-05248ca44d07 nodeName:}" failed. No retries permitted until 2025-11-24 17:05:18.919273436 +0000 UTC m=+800.166242094 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-webhook-certs") pod "openstack-operator-controller-manager-56fcd5b457-nhnr6" (UID: "920f3653-2dc6-4999-81c4-05248ca44d07") : secret "webhook-server-cert" not found Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.051362 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xv4wf"] Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.062324 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-dsjtl"] Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.118148 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-jfk9g"] Nov 24 17:05:18 crc kubenswrapper[4768]: W1124 17:05:18.129370 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7e72195_5597_498f_906e_573b0c5c8295.slice/crio-94c2ca2aeb231be2d1b8f3527406dc3239061ab5c9eb2b2ac5d119c14d20aca3 WatchSource:0}: Error finding container 94c2ca2aeb231be2d1b8f3527406dc3239061ab5c9eb2b2ac5d119c14d20aca3: Status 404 returned error can't find the container with id 94c2ca2aeb231be2d1b8f3527406dc3239061ab5c9eb2b2ac5d119c14d20aca3 Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.141813 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-zsr4q"] Nov 24 17:05:18 crc kubenswrapper[4768]: W1124 17:05:18.167212 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eff7b8e_21b1_4d9f_ac7b_bc44593394c1.slice/crio-a93eb0e24a490e797c13493321133b6fa905dda3fc4b0a375f5c15ea62139d76 WatchSource:0}: Error finding container a93eb0e24a490e797c13493321133b6fa905dda3fc4b0a375f5c15ea62139d76: Status 404 returned error can't find the container with id a93eb0e24a490e797c13493321133b6fa905dda3fc4b0a375f5c15ea62139d76 Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.172108 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-qnqvs"] Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.177198 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-x24f2"] Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.182116 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-v2hfk"] Nov 24 17:05:18 crc kubenswrapper[4768]: W1124 17:05:18.188533 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27ed9b45_b076_4104_a661_bc231021ae5b.slice/crio-40b4007011b6fc80fce274bf89ce700205a493fc6c5e627e507ff1935f8e2756 WatchSource:0}: Error finding container 40b4007011b6fc80fce274bf89ce700205a493fc6c5e627e507ff1935f8e2756: Status 404 returned error can't find the container with id 40b4007011b6fc80fce274bf89ce700205a493fc6c5e627e507ff1935f8e2756 Nov 24 17:05:18 crc kubenswrapper[4768]: W1124 17:05:18.189965 4768 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7c09f33_05d7_4251_930c_43d381f7f662.slice/crio-bdc57d6f65375cb2b3332f3344564528c38b434b7cf1a3f11a4d53b14032c5f6 WatchSource:0}: Error finding container bdc57d6f65375cb2b3332f3344564528c38b434b7cf1a3f11a4d53b14032c5f6: Status 404 returned error can't find the container with id bdc57d6f65375cb2b3332f3344564528c38b434b7cf1a3f11a4d53b14032c5f6 Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.191103 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-9x7r8"] Nov 24 17:05:18 crc kubenswrapper[4768]: E1124 17:05:18.195436 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-str8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-v2hfk_openstack-operators(f7c09f33-05d7-4251-930c-43d381f7f662): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 17:05:18 crc kubenswrapper[4768]: E1124 17:05:18.198026 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-str8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-v2hfk_openstack-operators(f7c09f33-05d7-4251-930c-43d381f7f662): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 17:05:18 crc kubenswrapper[4768]: E1124 17:05:18.199636 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-v2hfk" podUID="f7c09f33-05d7-4251-930c-43d381f7f662" Nov 24 17:05:18 crc kubenswrapper[4768]: W1124 17:05:18.202885 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f3138aa_0515_46f5_b897_191356f55fa4.slice/crio-5ba86d91180ea8fecd9ce9a337c3f73b342883409841f09470d09a71ea07e0ef WatchSource:0}: Error finding container 5ba86d91180ea8fecd9ce9a337c3f73b342883409841f09470d09a71ea07e0ef: Status 404 returned error can't find the container with id 5ba86d91180ea8fecd9ce9a337c3f73b342883409841f09470d09a71ea07e0ef Nov 24 17:05:18 crc kubenswrapper[4768]: E1124 17:05:18.207263 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hkfkn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7c57c8bbc4-9x7r8_openstack-operators(2f3138aa-0515-46f5-b897-191356f55fa4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 17:05:18 crc kubenswrapper[4768]: E1124 17:05:18.215115 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hkfkn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7c57c8bbc4-9x7r8_openstack-operators(2f3138aa-0515-46f5-b897-191356f55fa4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 17:05:18 crc kubenswrapper[4768]: E1124 17:05:18.216279 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull 
QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-9x7r8" podUID="2f3138aa-0515-46f5-b897-191356f55fa4" Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.329736 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-58fc45656d-mlqr9" event={"ID":"cdfcbb97-9f2e-40ab-863a-93e592ee728a","Type":"ContainerStarted","Data":"b856359e40c6d2d3e60c6542b5ef6f7dc437b7228812fff943d8f08e541585be"} Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.330769 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-9x7r8" event={"ID":"2f3138aa-0515-46f5-b897-191356f55fa4","Type":"ContainerStarted","Data":"5ba86d91180ea8fecd9ce9a337c3f73b342883409841f09470d09a71ea07e0ef"} Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.332159 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ttgkz"] Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.336807 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-gtc95"] Nov 24 17:05:18 crc kubenswrapper[4768]: E1124 17:05:18.341964 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-9x7r8" podUID="2f3138aa-0515-46f5-b897-191356f55fa4" Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.344879 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-gnzjb" event={"ID":"1a60eac6-e17c-4621-9367-3d1b60aab811","Type":"ContainerStarted","Data":"90361c6d2b727d9fda526f85e6aa4ceb722b704cb8f705f2e414d2cb7524e45e"} Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.351613 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jdszs" event={"ID":"f5b8ba2f-084a-4285-938b-5ffe669a9250","Type":"ContainerStarted","Data":"71ffe835f8e1aa99d468b8552cf9214a6b8f24ea48a93757fc4016b078fe84c5"} Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.357501 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-b2r7j"] Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.366600 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-qnqvs" event={"ID":"e2f173d4-03f8-44b0-b05f-3dfd845569e8","Type":"ContainerStarted","Data":"74ba83e49edf53de377880225b207040b42168ed5ec60f5e0e1ee0335fd96b7b"} Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.367863 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zsr4q" 
event={"ID":"8eff7b8e-21b1-4d9f-ac7b-bc44593394c1","Type":"ContainerStarted","Data":"a93eb0e24a490e797c13493321133b6fa905dda3fc4b0a375f5c15ea62139d76"} Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.371572 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-6smrr" event={"ID":"d35343f5-188c-4787-9002-125c9e597e80","Type":"ContainerStarted","Data":"7ad84a40f3776574bf4d66ceaa0da5a32bb79371c7714749b67a243c76f1001c"} Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.377787 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-9sgvb" event={"ID":"18e53c5a-b1dd-4f0e-9bf6-8d97954d9d5d","Type":"ContainerStarted","Data":"f69d83e23d4c9d9b860e0c7ed3063dbc716c80e5001ff11e044511ebca2cb1cd"} Nov 24 17:05:18 crc kubenswrapper[4768]: E1124 17:05:18.380209 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-794xt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-ttgkz_openstack-operators(9657d373-da37-4ca2-b8fe-7827bc37706f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 17:05:18 crc kubenswrapper[4768]: E1124 17:05:18.380265 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hn2gm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-b2r7j_openstack-operators(f5471d19-b623-4aa2-9a14-56d05fe236f8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.380545 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-d9crw" event={"ID":"e2835f06-b5ce-4170-a4c3-4a08e9cc2815","Type":"ContainerStarted","Data":"19ecefbd27ae55344bce775335635617cd7351eb885576cf8134df80c010e28b"} Nov 24 17:05:18 crc kubenswrapper[4768]: E1124 17:05:18.381636 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ttgkz" podUID="9657d373-da37-4ca2-b8fe-7827bc37706f" Nov 24 17:05:18 crc kubenswrapper[4768]: E1124 17:05:18.382267 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hn2gm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-b2r7j_openstack-operators(f5471d19-b623-4aa2-9a14-56d05fe236f8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.382388 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-fxzrc" event={"ID":"db716c0e-bc96-4eaa-af75-184cd71e8124","Type":"ContainerStarted","Data":"0a02e8c817e755f1fe3392fb54ae190f6db9b84033825d2a3d18693c87373816"} Nov 24 17:05:18 crc kubenswrapper[4768]: E1124 17:05:18.383445 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-b2r7j" podUID="f5471d19-b623-4aa2-9a14-56d05fe236f8" Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.383840 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-hvvsp"] Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.385895 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xv4wf" event={"ID":"a8b9e845-7f76-4609-aef9-89d1a16c971b","Type":"ContainerStarted","Data":"13d2d093862dd7f89e85289bbe55b3a023f398406c012029a133c02ecdc735e8"} Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.387304 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-x24f2" event={"ID":"27ed9b45-b076-4104-a661-bc231021ae5b","Type":"ContainerStarted","Data":"40b4007011b6fc80fce274bf89ce700205a493fc6c5e627e507ff1935f8e2756"} Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.388740 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-v2hfk" event={"ID":"f7c09f33-05d7-4251-930c-43d381f7f662","Type":"ContainerStarted","Data":"bdc57d6f65375cb2b3332f3344564528c38b434b7cf1a3f11a4d53b14032c5f6"} Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.390305 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-dsjtl" event={"ID":"a718e502-d0e6-45ee-8a65-88de1381da04","Type":"ContainerStarted","Data":"40e47550d7213d9b03609dd2d2523dcded99464b0e1adfbe403ee1b2e862c028"} Nov 24 17:05:18 crc kubenswrapper[4768]: E1124 17:05:18.390805 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-v2hfk" podUID="f7c09f33-05d7-4251-930c-43d381f7f662" Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.391808 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6nh25" event={"ID":"8badbdc1-a611-4ada-821a-daade496a649","Type":"ContainerStarted","Data":"4ca07fc4a9507d39c67ddf587bf767917ce582b62269b58e0a6ddb483cfa0b26"} Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.392609 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jfk9g" event={"ID":"f7e72195-5597-498f-906e-573b0c5c8295","Type":"ContainerStarted","Data":"94c2ca2aeb231be2d1b8f3527406dc3239061ab5c9eb2b2ac5d119c14d20aca3"} Nov 24 17:05:18 crc kubenswrapper[4768]: W1124 17:05:18.395239 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b5647ed_7d14_4366_af99_d6d48ec2f033.slice/crio-b48e16480602645c94da7024423d1569a733a318a832112895716efb005ed2ee WatchSource:0}: Error finding container b48e16480602645c94da7024423d1569a733a318a832112895716efb005ed2ee: Status 404 returned error can't find the container with id b48e16480602645c94da7024423d1569a733a318a832112895716efb005ed2ee Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.395732 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twkb6" event={"ID":"9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6","Type":"ContainerStarted","Data":"e64472ff17d8caeab8fc9889305d0cae85b44ddf62aabbd9a5a530d4daa07641"} Nov 24 17:05:18 crc kubenswrapper[4768]: E1124 17:05:18.398785 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gb27r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-567f98c9d-hvvsp_openstack-operators(5b5647ed-7d14-4366-af99-d6d48ec2f033): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 17:05:18 crc kubenswrapper[4768]: E1124 17:05:18.400805 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gb27r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-567f98c9d-hvvsp_openstack-operators(5b5647ed-7d14-4366-af99-d6d48ec2f033): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 17:05:18 crc kubenswrapper[4768]: E1124 17:05:18.401983 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hvvsp" podUID="5b5647ed-7d14-4366-af99-d6d48ec2f033" Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.424038 4768 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-twkb6" podStartSLOduration=3.275020295 podStartE2EDuration="5.424019516s" podCreationTimestamp="2025-11-24 17:05:13 +0000 UTC" firstStartedPulling="2025-11-24 17:05:15.253415428 +0000 UTC m=+796.500384086" lastFinishedPulling="2025-11-24 17:05:17.402414649 +0000 UTC m=+798.649383307" observedRunningTime="2025-11-24 17:05:18.422328859 +0000 UTC m=+799.669297517" watchObservedRunningTime="2025-11-24 17:05:18.424019516 +0000 UTC m=+799.670988174" Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.430956 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2020ac4a-5a4a-4c38-b667-5432dbf3d891-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g\" (UID: \"2020ac4a-5a4a-4c38-b667-5432dbf3d891\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g" Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.436618 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2020ac4a-5a4a-4c38-b667-5432dbf3d891-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g\" (UID: \"2020ac4a-5a4a-4c38-b667-5432dbf3d891\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g" Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.498229 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g" Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.944653 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-metrics-certs\") pod \"openstack-operator-controller-manager-56fcd5b457-nhnr6\" (UID: \"920f3653-2dc6-4999-81c4-05248ca44d07\") " pod="openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6" Nov 24 17:05:18 crc kubenswrapper[4768]: E1124 17:05:18.945008 4768 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 24 17:05:18 crc kubenswrapper[4768]: E1124 17:05:18.946086 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-metrics-certs podName:920f3653-2dc6-4999-81c4-05248ca44d07 nodeName:}" failed. No retries permitted until 2025-11-24 17:05:20.946065876 +0000 UTC m=+802.193034534 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-metrics-certs") pod "openstack-operator-controller-manager-56fcd5b457-nhnr6" (UID: "920f3653-2dc6-4999-81c4-05248ca44d07") : secret "metrics-server-cert" not found Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.949864 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-webhook-certs\") pod \"openstack-operator-controller-manager-56fcd5b457-nhnr6\" (UID: \"920f3653-2dc6-4999-81c4-05248ca44d07\") " pod="openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6" Nov 24 17:05:18 crc kubenswrapper[4768]: E1124 17:05:18.950729 4768 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 17:05:18 crc kubenswrapper[4768]: E1124 17:05:18.950807 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-webhook-certs podName:920f3653-2dc6-4999-81c4-05248ca44d07 nodeName:}" failed. No retries permitted until 2025-11-24 17:05:20.950787478 +0000 UTC m=+802.197756136 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-webhook-certs") pod "openstack-operator-controller-manager-56fcd5b457-nhnr6" (UID: "920f3653-2dc6-4999-81c4-05248ca44d07") : secret "webhook-server-cert" not found Nov 24 17:05:18 crc kubenswrapper[4768]: I1124 17:05:18.966426 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g"] Nov 24 17:05:19 crc kubenswrapper[4768]: I1124 17:05:19.286867 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cq696"] Nov 24 17:05:19 crc kubenswrapper[4768]: I1124 17:05:19.288263 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cq696" Nov 24 17:05:19 crc kubenswrapper[4768]: I1124 17:05:19.297297 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cq696"] Nov 24 17:05:19 crc kubenswrapper[4768]: I1124 17:05:19.411117 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-b2r7j" event={"ID":"f5471d19-b623-4aa2-9a14-56d05fe236f8","Type":"ContainerStarted","Data":"df4342bdae5088782388c349cf49c0979e5feaa223e48d0652b7630173429b3c"} Nov 24 17:05:19 crc kubenswrapper[4768]: E1124 17:05:19.436579 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-b2r7j" podUID="f5471d19-b623-4aa2-9a14-56d05fe236f8" Nov 24 17:05:19 crc kubenswrapper[4768]: I1124 17:05:19.437608 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ttgkz" event={"ID":"9657d373-da37-4ca2-b8fe-7827bc37706f","Type":"ContainerStarted","Data":"b26e0a56c1a9e8a78c9a616daccda7fa453244ab561fabaf8c99891e9d56d479"} Nov 24 17:05:19 crc kubenswrapper[4768]: I1124 17:05:19.442196 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-gtc95" event={"ID":"d40b5804-6340-4be6-8da4-dca19827c8ee","Type":"ContainerStarted","Data":"337e499dfa56d431954d9198738ebd2343beb427b43866674c1fe0c0d19c747d"} Nov 24 17:05:19 crc kubenswrapper[4768]: I1124 17:05:19.443435 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g" event={"ID":"2020ac4a-5a4a-4c38-b667-5432dbf3d891","Type":"ContainerStarted","Data":"e8493bfb50bf154047e6ae2a123bb5d6e7af07846f312f7ce894227f8ca293b4"} Nov 24 17:05:19 crc kubenswrapper[4768]: E1124 17:05:19.444200 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ttgkz" podUID="9657d373-da37-4ca2-b8fe-7827bc37706f" Nov 24 17:05:19 crc kubenswrapper[4768]: I1124 17:05:19.446796 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hvvsp" event={"ID":"5b5647ed-7d14-4366-af99-d6d48ec2f033","Type":"ContainerStarted","Data":"b48e16480602645c94da7024423d1569a733a318a832112895716efb005ed2ee"} Nov 24 17:05:19 crc kubenswrapper[4768]: E1124 17:05:19.451232 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-9x7r8" podUID="2f3138aa-0515-46f5-b897-191356f55fa4" Nov 24 17:05:19 crc kubenswrapper[4768]: E1124 17:05:19.452518 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hvvsp" podUID="5b5647ed-7d14-4366-af99-d6d48ec2f033" Nov 24 17:05:19 crc kubenswrapper[4768]: E1124 17:05:19.453071 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-v2hfk" podUID="f7c09f33-05d7-4251-930c-43d381f7f662" Nov 24 17:05:19 crc kubenswrapper[4768]: I1124 17:05:19.463185 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90fcab37-b99c-486d-a48b-059f3d28a3ee-catalog-content\") pod \"redhat-operators-cq696\" (UID: \"90fcab37-b99c-486d-a48b-059f3d28a3ee\") " pod="openshift-marketplace/redhat-operators-cq696" Nov 24 17:05:19 crc kubenswrapper[4768]: I1124 17:05:19.463269 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90fcab37-b99c-486d-a48b-059f3d28a3ee-utilities\") pod \"redhat-operators-cq696\" (UID: \"90fcab37-b99c-486d-a48b-059f3d28a3ee\") " pod="openshift-marketplace/redhat-operators-cq696" Nov 24 17:05:19 crc kubenswrapper[4768]: I1124 17:05:19.463366 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zq7v\" (UniqueName: \"kubernetes.io/projected/90fcab37-b99c-486d-a48b-059f3d28a3ee-kube-api-access-6zq7v\") pod \"redhat-operators-cq696\" (UID: \"90fcab37-b99c-486d-a48b-059f3d28a3ee\") " pod="openshift-marketplace/redhat-operators-cq696" Nov 24 17:05:19 crc kubenswrapper[4768]: I1124 17:05:19.564416 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zq7v\" (UniqueName: \"kubernetes.io/projected/90fcab37-b99c-486d-a48b-059f3d28a3ee-kube-api-access-6zq7v\") pod \"redhat-operators-cq696\" (UID: \"90fcab37-b99c-486d-a48b-059f3d28a3ee\") " pod="openshift-marketplace/redhat-operators-cq696" Nov 24 17:05:19 crc kubenswrapper[4768]: I1124 17:05:19.564502 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90fcab37-b99c-486d-a48b-059f3d28a3ee-catalog-content\") pod \"redhat-operators-cq696\" (UID: \"90fcab37-b99c-486d-a48b-059f3d28a3ee\") " pod="openshift-marketplace/redhat-operators-cq696" Nov 24 17:05:19 crc 
kubenswrapper[4768]: I1124 17:05:19.564577 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90fcab37-b99c-486d-a48b-059f3d28a3ee-utilities\") pod \"redhat-operators-cq696\" (UID: \"90fcab37-b99c-486d-a48b-059f3d28a3ee\") " pod="openshift-marketplace/redhat-operators-cq696" Nov 24 17:05:19 crc kubenswrapper[4768]: I1124 17:05:19.565559 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90fcab37-b99c-486d-a48b-059f3d28a3ee-utilities\") pod \"redhat-operators-cq696\" (UID: \"90fcab37-b99c-486d-a48b-059f3d28a3ee\") " pod="openshift-marketplace/redhat-operators-cq696" Nov 24 17:05:19 crc kubenswrapper[4768]: I1124 17:05:19.574431 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90fcab37-b99c-486d-a48b-059f3d28a3ee-catalog-content\") pod \"redhat-operators-cq696\" (UID: \"90fcab37-b99c-486d-a48b-059f3d28a3ee\") " pod="openshift-marketplace/redhat-operators-cq696" Nov 24 17:05:19 crc kubenswrapper[4768]: I1124 17:05:19.586302 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zq7v\" (UniqueName: \"kubernetes.io/projected/90fcab37-b99c-486d-a48b-059f3d28a3ee-kube-api-access-6zq7v\") pod \"redhat-operators-cq696\" (UID: \"90fcab37-b99c-486d-a48b-059f3d28a3ee\") " pod="openshift-marketplace/redhat-operators-cq696" Nov 24 17:05:19 crc kubenswrapper[4768]: I1124 17:05:19.619959 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cq696" Nov 24 17:05:20 crc kubenswrapper[4768]: E1124 17:05:20.465149 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ttgkz" podUID="9657d373-da37-4ca2-b8fe-7827bc37706f" Nov 24 17:05:20 crc kubenswrapper[4768]: E1124 17:05:20.469955 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hvvsp" podUID="5b5647ed-7d14-4366-af99-d6d48ec2f033" Nov 24 17:05:20 crc kubenswrapper[4768]: E1124 17:05:20.470165 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-b2r7j" podUID="f5471d19-b623-4aa2-9a14-56d05fe236f8" Nov 24 17:05:21 crc kubenswrapper[4768]: I1124 17:05:21.022189 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-webhook-certs\") pod \"openstack-operator-controller-manager-56fcd5b457-nhnr6\" (UID: \"920f3653-2dc6-4999-81c4-05248ca44d07\") " pod="openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6" Nov 24 17:05:21 crc kubenswrapper[4768]: I1124 17:05:21.022291 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-metrics-certs\") pod \"openstack-operator-controller-manager-56fcd5b457-nhnr6\" (UID: \"920f3653-2dc6-4999-81c4-05248ca44d07\") " pod="openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6" Nov 24 17:05:21 crc kubenswrapper[4768]: I1124 17:05:21.028793 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-webhook-certs\") pod \"openstack-operator-controller-manager-56fcd5b457-nhnr6\" (UID: \"920f3653-2dc6-4999-81c4-05248ca44d07\") " pod="openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6" Nov 24 17:05:21 crc kubenswrapper[4768]: I1124 17:05:21.043224 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/920f3653-2dc6-4999-81c4-05248ca44d07-metrics-certs\") pod \"openstack-operator-controller-manager-56fcd5b457-nhnr6\" (UID: \"920f3653-2dc6-4999-81c4-05248ca44d07\") " pod="openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6" Nov 24 17:05:21 crc kubenswrapper[4768]: I1124 17:05:21.111085 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6" Nov 24 17:05:24 crc kubenswrapper[4768]: I1124 17:05:24.262340 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-twkb6" Nov 24 17:05:24 crc kubenswrapper[4768]: I1124 17:05:24.263006 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-twkb6" Nov 24 17:05:24 crc kubenswrapper[4768]: I1124 17:05:24.329267 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-twkb6" Nov 24 17:05:24 crc kubenswrapper[4768]: I1124 17:05:24.485236 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kgl4t"] Nov 24 17:05:24 crc kubenswrapper[4768]: I1124 17:05:24.491797 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kgl4t" Nov 24 17:05:24 crc kubenswrapper[4768]: I1124 17:05:24.510839 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kgl4t"] Nov 24 17:05:24 crc kubenswrapper[4768]: I1124 17:05:24.568041 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-twkb6" Nov 24 17:05:24 crc kubenswrapper[4768]: I1124 17:05:24.676195 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4a92cd5-0847-4380-8530-9c9892f7b443-catalog-content\") pod \"certified-operators-kgl4t\" (UID: \"b4a92cd5-0847-4380-8530-9c9892f7b443\") " pod="openshift-marketplace/certified-operators-kgl4t" Nov 24 17:05:24 crc kubenswrapper[4768]: I1124 17:05:24.676250 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4a92cd5-0847-4380-8530-9c9892f7b443-utilities\") pod \"certified-operators-kgl4t\" (UID: \"b4a92cd5-0847-4380-8530-9c9892f7b443\") " pod="openshift-marketplace/certified-operators-kgl4t" Nov 24 17:05:24 crc kubenswrapper[4768]: I1124 17:05:24.676326 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rwhq\" (UniqueName: \"kubernetes.io/projected/b4a92cd5-0847-4380-8530-9c9892f7b443-kube-api-access-9rwhq\") pod \"certified-operators-kgl4t\" (UID: \"b4a92cd5-0847-4380-8530-9c9892f7b443\") " pod="openshift-marketplace/certified-operators-kgl4t" Nov 24 17:05:24 crc kubenswrapper[4768]: I1124 17:05:24.778082 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4a92cd5-0847-4380-8530-9c9892f7b443-catalog-content\") pod \"certified-operators-kgl4t\" (UID: \"b4a92cd5-0847-4380-8530-9c9892f7b443\") " pod="openshift-marketplace/certified-operators-kgl4t" Nov 24 17:05:24 crc kubenswrapper[4768]: I1124 17:05:24.778137 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4a92cd5-0847-4380-8530-9c9892f7b443-utilities\") pod \"certified-operators-kgl4t\" (UID: \"b4a92cd5-0847-4380-8530-9c9892f7b443\") " pod="openshift-marketplace/certified-operators-kgl4t" Nov 24 17:05:24 crc kubenswrapper[4768]: I1124 17:05:24.778178 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rwhq\" (UniqueName: \"kubernetes.io/projected/b4a92cd5-0847-4380-8530-9c9892f7b443-kube-api-access-9rwhq\") pod \"certified-operators-kgl4t\" (UID: \"b4a92cd5-0847-4380-8530-9c9892f7b443\") " pod="openshift-marketplace/certified-operators-kgl4t" Nov 24 17:05:24 crc kubenswrapper[4768]: I1124 17:05:24.778655 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4a92cd5-0847-4380-8530-9c9892f7b443-utilities\") pod \"certified-operators-kgl4t\" (UID: \"b4a92cd5-0847-4380-8530-9c9892f7b443\") " pod="openshift-marketplace/certified-operators-kgl4t" Nov 24 17:05:24 crc kubenswrapper[4768]: I1124 17:05:24.778656 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4a92cd5-0847-4380-8530-9c9892f7b443-catalog-content\") pod \"certified-operators-kgl4t\" (UID: 
\"b4a92cd5-0847-4380-8530-9c9892f7b443\") " pod="openshift-marketplace/certified-operators-kgl4t" Nov 24 17:05:24 crc kubenswrapper[4768]: I1124 17:05:24.805378 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rwhq\" (UniqueName: \"kubernetes.io/projected/b4a92cd5-0847-4380-8530-9c9892f7b443-kube-api-access-9rwhq\") pod \"certified-operators-kgl4t\" (UID: \"b4a92cd5-0847-4380-8530-9c9892f7b443\") " pod="openshift-marketplace/certified-operators-kgl4t" Nov 24 17:05:24 crc kubenswrapper[4768]: I1124 17:05:24.825683 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kgl4t" Nov 24 17:05:26 crc kubenswrapper[4768]: I1124 17:05:26.680404 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-twkb6"] Nov 24 17:05:26 crc kubenswrapper[4768]: I1124 17:05:26.680926 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-twkb6" podUID="9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6" containerName="registry-server" containerID="cri-o://e64472ff17d8caeab8fc9889305d0cae85b44ddf62aabbd9a5a530d4daa07641" gracePeriod=2 Nov 24 17:05:27 crc kubenswrapper[4768]: I1124 17:05:27.521571 4768 generic.go:334] "Generic (PLEG): container finished" podID="9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6" containerID="e64472ff17d8caeab8fc9889305d0cae85b44ddf62aabbd9a5a530d4daa07641" exitCode=0 Nov 24 17:05:27 crc kubenswrapper[4768]: I1124 17:05:27.521653 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twkb6" event={"ID":"9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6","Type":"ContainerDied","Data":"e64472ff17d8caeab8fc9889305d0cae85b44ddf62aabbd9a5a530d4daa07641"} Nov 24 17:05:31 crc kubenswrapper[4768]: E1124 17:05:31.439839 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:553b1288b330ad05771d59c6b73c1681c95f457e8475682f9ad0d2e6b85f37e9" Nov 24 17:05:31 crc kubenswrapper[4768]: E1124 17:05:31.440427 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:553b1288b330ad05771d59c6b73c1681c95f457e8475682f9ad0d2e6b85f37e9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8nmgb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-79856dc55c-gnzjb_openstack-operators(1a60eac6-e17c-4621-9367-3d1b60aab811): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 17:05:31 crc kubenswrapper[4768]: E1124 17:05:31.881812 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9" Nov 24 17:05:31 crc kubenswrapper[4768]: E1124 17:05:31.882322 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c84s2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c9694994-jfk9g_openstack-operators(f7e72195-5597-498f-906e-573b0c5c8295): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 17:05:32 crc kubenswrapper[4768]: E1124 17:05:32.334310 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7" Nov 24 17:05:32 crc kubenswrapper[4768]: E1124 17:05:32.334542 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-grzp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-9sgvb_openstack-operators(18e53c5a-b1dd-4f0e-9bf6-8d97954d9d5d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 17:05:32 crc kubenswrapper[4768]: E1124 17:05:32.726710 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a" Nov 24 17:05:32 crc kubenswrapper[4768]: E1124 17:05:32.726918 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dwk29,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-748dc6576f-zsr4q_openstack-operators(8eff7b8e-21b1-4d9f-ac7b-bc44593394c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 17:05:33 crc kubenswrapper[4768]: E1124 17:05:33.254481 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894" Nov 24 17:05:33 crc kubenswrapper[4768]: E1124 17:05:33.254761 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g8jnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-d5cc86f4b-d9crw_openstack-operators(e2835f06-b5ce-4170-a4c3-4a08e9cc2815): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 17:05:33 crc kubenswrapper[4768]: E1124 17:05:33.616302 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d" Nov 24 17:05:33 crc kubenswrapper[4768]: E1124 17:05:33.616499 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r99sz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-gtc95_openstack-operators(d40b5804-6340-4be6-8da4-dca19827c8ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 17:05:34 crc kubenswrapper[4768]: E1124 17:05:34.033333 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0" Nov 24 17:05:34 crc kubenswrapper[4768]: E1124 17:05:34.033530 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kx6fn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6fdc4fcf86-x24f2_openstack-operators(27ed9b45-b076-4104-a661-bc231021ae5b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 17:05:34 crc kubenswrapper[4768]: E1124 17:05:34.262226 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e64472ff17d8caeab8fc9889305d0cae85b44ddf62aabbd9a5a530d4daa07641 is running failed: container process not found" containerID="e64472ff17d8caeab8fc9889305d0cae85b44ddf62aabbd9a5a530d4daa07641" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 17:05:34 crc kubenswrapper[4768]: E1124 17:05:34.262760 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e64472ff17d8caeab8fc9889305d0cae85b44ddf62aabbd9a5a530d4daa07641 is running failed: container process not found" containerID="e64472ff17d8caeab8fc9889305d0cae85b44ddf62aabbd9a5a530d4daa07641" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 17:05:34 crc kubenswrapper[4768]: E1124 17:05:34.263021 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e64472ff17d8caeab8fc9889305d0cae85b44ddf62aabbd9a5a530d4daa07641 is running failed: container process not found" containerID="e64472ff17d8caeab8fc9889305d0cae85b44ddf62aabbd9a5a530d4daa07641" cmd=["grpc_health_probe","-addr=:50051"] Nov 24 17:05:34 crc kubenswrapper[4768]: E1124 17:05:34.263078 4768 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e64472ff17d8caeab8fc9889305d0cae85b44ddf62aabbd9a5a530d4daa07641 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-twkb6" podUID="9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6" containerName="registry-server" Nov 24 17:05:34 crc kubenswrapper[4768]: E1124 17:05:34.452632 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c" Nov 24 17:05:34 crc kubenswrapper[4768]: E1124 17:05:34.452849 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r4625,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-qnqvs_openstack-operators(e2f173d4-03f8-44b0-b05f-3dfd845569e8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 17:05:34 crc kubenswrapper[4768]: E1124 17:05:34.858304 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c6405d94e56b40ef669729216ab4b9c441f34bb280902efa2940038c076b560f" Nov 24 17:05:34 crc kubenswrapper[4768]: E1124 17:05:34.858510 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c6405d94e56b40ef669729216ab4b9c441f34bb280902efa2940038c076b560f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j2qgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-7d695c9b56-jdszs_openstack-operators(f5b8ba2f-084a-4285-938b-5ffe669a9250): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 17:05:34 crc kubenswrapper[4768]: I1124 17:05:34.892546 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:05:34 crc kubenswrapper[4768]: I1124 17:05:34.893020 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:05:35 crc kubenswrapper[4768]: E1124 17:05:35.452627 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/openstack-k8s-operators/ironic-operator:bd29897380326806c801a45eb708db4292a017c0" Nov 24 17:05:35 crc kubenswrapper[4768]: E1124 17:05:35.452685 4768 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/openstack-k8s-operators/ironic-operator:bd29897380326806c801a45eb708db4292a017c0" Nov 24 17:05:35 crc kubenswrapper[4768]: E1124 17:05:35.452829 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.18:5001/openstack-k8s-operators/ironic-operator:bd29897380326806c801a45eb708db4292a017c0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mbq4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-58fc45656d-mlqr9_openstack-operators(cdfcbb97-9f2e-40ab-863a-93e592ee728a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 17:05:37 crc kubenswrapper[4768]: I1124 17:05:37.241072 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twkb6" Nov 24 17:05:37 crc kubenswrapper[4768]: I1124 17:05:37.269540 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6-catalog-content\") pod \"9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6\" (UID: \"9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6\") " Nov 24 17:05:37 crc kubenswrapper[4768]: I1124 17:05:37.269720 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrwm4\" (UniqueName: \"kubernetes.io/projected/9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6-kube-api-access-hrwm4\") pod \"9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6\" (UID: \"9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6\") " Nov 24 17:05:37 crc kubenswrapper[4768]: I1124 17:05:37.269785 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6-utilities\") pod \"9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6\" (UID: \"9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6\") " Nov 24 17:05:37 crc kubenswrapper[4768]: I1124 17:05:37.271713 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6-utilities" (OuterVolumeSpecName: "utilities") pod "9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6" (UID: "9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:05:37 crc kubenswrapper[4768]: I1124 17:05:37.280557 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6-kube-api-access-hrwm4" (OuterVolumeSpecName: "kube-api-access-hrwm4") pod "9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6" (UID: "9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6"). InnerVolumeSpecName "kube-api-access-hrwm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:05:37 crc kubenswrapper[4768]: I1124 17:05:37.294174 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6" (UID: "9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:05:37 crc kubenswrapper[4768]: I1124 17:05:37.370895 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrwm4\" (UniqueName: \"kubernetes.io/projected/9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6-kube-api-access-hrwm4\") on node \"crc\" DevicePath \"\"" Nov 24 17:05:37 crc kubenswrapper[4768]: I1124 17:05:37.370929 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:05:37 crc kubenswrapper[4768]: I1124 17:05:37.370940 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:05:37 crc kubenswrapper[4768]: I1124 17:05:37.584494 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twkb6" Nov 24 17:05:37 crc kubenswrapper[4768]: I1124 17:05:37.590781 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twkb6" event={"ID":"9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6","Type":"ContainerDied","Data":"76e454732354ddbeb81a19b5716d56513cc69716303fcd71894e2f5aafc5dfc1"} Nov 24 17:05:37 crc kubenswrapper[4768]: I1124 17:05:37.590833 4768 scope.go:117] "RemoveContainer" containerID="e64472ff17d8caeab8fc9889305d0cae85b44ddf62aabbd9a5a530d4daa07641" Nov 24 17:05:37 crc kubenswrapper[4768]: I1124 17:05:37.615559 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-twkb6"] Nov 24 17:05:37 crc kubenswrapper[4768]: I1124 17:05:37.621189 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-twkb6"] Nov 24 17:05:38 crc kubenswrapper[4768]: I1124 17:05:38.551552 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cq696"] Nov 24 17:05:39 crc kubenswrapper[4768]: I1124 17:05:39.519048 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6"] Nov 24 17:05:39 crc kubenswrapper[4768]: I1124 17:05:39.588154 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6" path="/var/lib/kubelet/pods/9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6/volumes" Nov 24 17:05:39 crc kubenswrapper[4768]: I1124 17:05:39.597549 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cq696" event={"ID":"90fcab37-b99c-486d-a48b-059f3d28a3ee","Type":"ContainerStarted","Data":"e44d05b994c6c8870acfe64510287d201feab1a8984a3986cdf944f2acd010e0"} Nov 24 17:05:39 crc kubenswrapper[4768]: I1124 17:05:39.964437 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kgl4t"] Nov 24 17:05:40 crc kubenswrapper[4768]: W1124 17:05:40.081993 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4a92cd5_0847_4380_8530_9c9892f7b443.slice/crio-f348be5d5c093f440ddc238018a66ee2881679b93fbe06bc25fa50d2f198fddb WatchSource:0}: Error finding container f348be5d5c093f440ddc238018a66ee2881679b93fbe06bc25fa50d2f198fddb: Status 404 returned error can't find the container with id f348be5d5c093f440ddc238018a66ee2881679b93fbe06bc25fa50d2f198fddb Nov 24 17:05:40 crc kubenswrapper[4768]: I1124 17:05:40.090478 4768 scope.go:117] "RemoveContainer" containerID="27bee47c48112b79835448a0cecdd744ec60239818a2e3acc1a6118ec74146c4" Nov 24 17:05:40 crc kubenswrapper[4768]: I1124 17:05:40.214440 4768 scope.go:117] "RemoveContainer" containerID="52a38bf2527a01c3da924a907304483532c3d0cf9945844731ac744ee2bc9080" Nov 24 17:05:40 crc kubenswrapper[4768]: I1124 17:05:40.608144 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-fxzrc" event={"ID":"db716c0e-bc96-4eaa-af75-184cd71e8124","Type":"ContainerStarted","Data":"fb888e572f2831d5b35ccdbc5685856a3b676c774f23299b285c7cbe67b354a6"} Nov 24 17:05:40 crc kubenswrapper[4768]: I1124 17:05:40.612437 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-6smrr" 
event={"ID":"d35343f5-188c-4787-9002-125c9e597e80","Type":"ContainerStarted","Data":"1a5870cf95e14dc20fd63cc944bb34ec5063fa6cf5251af6bade6a75cbd91c33"} Nov 24 17:05:40 crc kubenswrapper[4768]: I1124 17:05:40.614196 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-dsjtl" event={"ID":"a718e502-d0e6-45ee-8a65-88de1381da04","Type":"ContainerStarted","Data":"fde9efa38dbd61d8c5665a694aef4f85f5fbbe0cc3bdd9d22283c287dab02434"} Nov 24 17:05:40 crc kubenswrapper[4768]: I1124 17:05:40.627092 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6nh25" event={"ID":"8badbdc1-a611-4ada-821a-daade496a649","Type":"ContainerStarted","Data":"c0380af58cad025fbb37cdabb6385751b0af439cb1e7ac53e8388445ecff9f5a"} Nov 24 17:05:40 crc kubenswrapper[4768]: I1124 17:05:40.628747 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgl4t" event={"ID":"b4a92cd5-0847-4380-8530-9c9892f7b443","Type":"ContainerStarted","Data":"f348be5d5c093f440ddc238018a66ee2881679b93fbe06bc25fa50d2f198fddb"} Nov 24 17:05:40 crc kubenswrapper[4768]: I1124 17:05:40.630624 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6" event={"ID":"920f3653-2dc6-4999-81c4-05248ca44d07","Type":"ContainerStarted","Data":"d416780c3948ca2eeba6a5cf0774ea1b5e69c818a0c5ebc80819c7099bbf09aa"} Nov 24 17:05:40 crc kubenswrapper[4768]: I1124 17:05:40.640665 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xv4wf" event={"ID":"a8b9e845-7f76-4609-aef9-89d1a16c971b","Type":"ContainerStarted","Data":"aed9ba52d4df83910f267b4c13da0ccad07af9cb2302f19a616a4445d4645686"} Nov 24 17:05:40 crc kubenswrapper[4768]: I1124 17:05:40.647741 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4xg49" event={"ID":"61f1ba78-cd9d-4202-9463-f7a4c5cc9092","Type":"ContainerStarted","Data":"cb616bef27be8c8c0573d4730ed8eb636c23ff63a293111a8609edf2745a88a6"} Nov 24 17:05:41 crc kubenswrapper[4768]: I1124 17:05:41.658911 4768 generic.go:334] "Generic (PLEG): container finished" podID="b4a92cd5-0847-4380-8530-9c9892f7b443" containerID="21b2830d0971f89b3515d18c673785bd84da2a091563f4341934dd4ff5f7e9af" exitCode=0 Nov 24 17:05:41 crc kubenswrapper[4768]: I1124 17:05:41.659016 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgl4t" event={"ID":"b4a92cd5-0847-4380-8530-9c9892f7b443","Type":"ContainerDied","Data":"21b2830d0971f89b3515d18c673785bd84da2a091563f4341934dd4ff5f7e9af"} Nov 24 17:05:41 crc kubenswrapper[4768]: I1124 17:05:41.661912 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6" event={"ID":"920f3653-2dc6-4999-81c4-05248ca44d07","Type":"ContainerStarted","Data":"04414a839df6a2c1311b797cd9ea01dbba83f367d15f36b5ea9f0c46f447a217"} Nov 24 17:05:41 crc kubenswrapper[4768]: I1124 17:05:41.662033 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6" Nov 24 17:05:41 crc kubenswrapper[4768]: I1124 17:05:41.664393 4768 generic.go:334] "Generic (PLEG): container finished" podID="90fcab37-b99c-486d-a48b-059f3d28a3ee" 
containerID="39799a605116639d882aba40ec63ece8537e13ac5b0b6344af0b47dc1bfd3ba3" exitCode=0 Nov 24 17:05:41 crc kubenswrapper[4768]: I1124 17:05:41.664438 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cq696" event={"ID":"90fcab37-b99c-486d-a48b-059f3d28a3ee","Type":"ContainerDied","Data":"39799a605116639d882aba40ec63ece8537e13ac5b0b6344af0b47dc1bfd3ba3"} Nov 24 17:05:41 crc kubenswrapper[4768]: I1124 17:05:41.702864 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6" podStartSLOduration=25.702844027 podStartE2EDuration="25.702844027s" podCreationTimestamp="2025-11-24 17:05:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:05:41.697909949 +0000 UTC m=+822.944878607" watchObservedRunningTime="2025-11-24 17:05:41.702844027 +0000 UTC m=+822.949812675" Nov 24 17:05:41 crc kubenswrapper[4768]: E1124 17:05:41.929936 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5cb74df96-gtc95" podUID="d40b5804-6340-4be6-8da4-dca19827c8ee" Nov 24 17:05:41 crc kubenswrapper[4768]: E1124 17:05:41.990664 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-9sgvb" podUID="18e53c5a-b1dd-4f0e-9bf6-8d97954d9d5d" Nov 24 17:05:42 crc kubenswrapper[4768]: E1124 17:05:42.079274 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jdszs" podUID="f5b8ba2f-084a-4285-938b-5ffe669a9250" Nov 24 17:05:42 crc kubenswrapper[4768]: E1124 17:05:42.342377 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-d9crw" podUID="e2835f06-b5ce-4170-a4c3-4a08e9cc2815" Nov 24 17:05:42 crc kubenswrapper[4768]: E1124 17:05:42.401960 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-gnzjb" podUID="1a60eac6-e17c-4621-9367-3d1b60aab811" Nov 24 17:05:42 crc kubenswrapper[4768]: E1124 17:05:42.582167 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zsr4q" podUID="8eff7b8e-21b1-4d9f-ac7b-bc44593394c1" Nov 24 17:05:42 crc kubenswrapper[4768]: I1124 17:05:42.674617 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-9x7r8" 
event={"ID":"2f3138aa-0515-46f5-b897-191356f55fa4","Type":"ContainerStarted","Data":"d68b660b028743a9343b0bbfeb92b404d1ce1f67d27eff2fdf12b0097fe815b1"} Nov 24 17:05:42 crc kubenswrapper[4768]: I1124 17:05:42.676135 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-9sgvb" event={"ID":"18e53c5a-b1dd-4f0e-9bf6-8d97954d9d5d","Type":"ContainerStarted","Data":"c4d2cda4211c6bd8096887a8b90f52183b45816273104fac9c7dfb77f444e1a8"} Nov 24 17:05:42 crc kubenswrapper[4768]: I1124 17:05:42.677399 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ttgkz" event={"ID":"9657d373-da37-4ca2-b8fe-7827bc37706f","Type":"ContainerStarted","Data":"8ba5b4218f29b307b5af4af004af69aee292c9817b3e73d9f2670ecd39078451"} Nov 24 17:05:42 crc kubenswrapper[4768]: I1124 17:05:42.680209 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g" event={"ID":"2020ac4a-5a4a-4c38-b667-5432dbf3d891","Type":"ContainerStarted","Data":"a4adfcb7df6747d78eb6f3770bc6a4885d169aea369607d251c790d71baa3a37"} Nov 24 17:05:42 crc kubenswrapper[4768]: I1124 17:05:42.681087 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hvvsp" event={"ID":"5b5647ed-7d14-4366-af99-d6d48ec2f033","Type":"ContainerStarted","Data":"2d6597327ba5e71271e57989901c42c796bfc1728ef9e062a2fa3fd19f9b2290"} Nov 24 17:05:42 crc kubenswrapper[4768]: I1124 17:05:42.684122 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-v2hfk" event={"ID":"f7c09f33-05d7-4251-930c-43d381f7f662","Type":"ContainerStarted","Data":"6d3df6890c3c9590e8e1aefd7322abdb572d769162be54432a676b8d416f8355"} Nov 24 17:05:42 crc kubenswrapper[4768]: I1124 17:05:42.684152 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-v2hfk" event={"ID":"f7c09f33-05d7-4251-930c-43d381f7f662","Type":"ContainerStarted","Data":"ef93c7891cb7ad56b3fa194be618e63696a881c2cd740952f23a6b7fb03fe9ff"} Nov 24 17:05:42 crc kubenswrapper[4768]: I1124 17:05:42.684361 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-v2hfk" Nov 24 17:05:42 crc kubenswrapper[4768]: I1124 17:05:42.685797 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-b2r7j" event={"ID":"f5471d19-b623-4aa2-9a14-56d05fe236f8","Type":"ContainerStarted","Data":"dd26769bdc843d6dd392a21bf295437778b1a6c08ca9e6b21b900998526df16f"} Nov 24 17:05:42 crc kubenswrapper[4768]: I1124 17:05:42.687200 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-d9crw" event={"ID":"e2835f06-b5ce-4170-a4c3-4a08e9cc2815","Type":"ContainerStarted","Data":"68269605b28215035f796f08911ac634d92719a81e38e75a21bf643734ca5882"} Nov 24 17:05:42 crc kubenswrapper[4768]: I1124 17:05:42.692972 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-fxzrc" event={"ID":"db716c0e-bc96-4eaa-af75-184cd71e8124","Type":"ContainerStarted","Data":"3bede97b2c145fde8d878f3e2fe5380ad024855c4e5b76e7e13a595e52f636f4"} Nov 24 17:05:42 crc kubenswrapper[4768]: I1124 
17:05:42.693552 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-fxzrc" Nov 24 17:05:42 crc kubenswrapper[4768]: I1124 17:05:42.705335 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zsr4q" event={"ID":"8eff7b8e-21b1-4d9f-ac7b-bc44593394c1","Type":"ContainerStarted","Data":"65596eaf7c8600fabc07ec76efb594020b2c5ade6cd5c8679d57f5dcaebff3d8"} Nov 24 17:05:42 crc kubenswrapper[4768]: I1124 17:05:42.707749 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jdszs" event={"ID":"f5b8ba2f-084a-4285-938b-5ffe669a9250","Type":"ContainerStarted","Data":"b28f04790eaf46e5ac4af0e70474f8038152aecf44f2098f048ad8cf767c048f"} Nov 24 17:05:42 crc kubenswrapper[4768]: I1124 17:05:42.711862 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-fxzrc" podStartSLOduration=2.277896115 podStartE2EDuration="26.711851503s" podCreationTimestamp="2025-11-24 17:05:16 +0000 UTC" firstStartedPulling="2025-11-24 17:05:17.511000803 +0000 UTC m=+798.757969461" lastFinishedPulling="2025-11-24 17:05:41.944956191 +0000 UTC m=+823.191924849" observedRunningTime="2025-11-24 17:05:42.711098132 +0000 UTC m=+823.958066790" watchObservedRunningTime="2025-11-24 17:05:42.711851503 +0000 UTC m=+823.958820161" Nov 24 17:05:42 crc kubenswrapper[4768]: I1124 17:05:42.718376 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-gnzjb" event={"ID":"1a60eac6-e17c-4621-9367-3d1b60aab811","Type":"ContainerStarted","Data":"71dd6a8c515f3a4a09d5de43e1747a73edc97df93bda60c7d7e98df63de2c396"} Nov 24 17:05:42 crc kubenswrapper[4768]: I1124 17:05:42.731441 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-gtc95" event={"ID":"d40b5804-6340-4be6-8da4-dca19827c8ee","Type":"ContainerStarted","Data":"2567b3d3e9e34ff363a3c9cfcd7b7c71c1d853cb353cb0f35f533f77f88a4b62"} Nov 24 17:05:42 crc kubenswrapper[4768]: E1124 17:05:42.738498 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c6405d94e56b40ef669729216ab4b9c441f34bb280902efa2940038c076b560f\\\"\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jdszs" podUID="f5b8ba2f-084a-4285-938b-5ffe669a9250" Nov 24 17:05:42 crc kubenswrapper[4768]: I1124 17:05:42.751257 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-ttgkz" podStartSLOduration=3.842312128 podStartE2EDuration="25.75123318s" podCreationTimestamp="2025-11-24 17:05:17 +0000 UTC" firstStartedPulling="2025-11-24 17:05:18.380096062 +0000 UTC m=+799.627064710" lastFinishedPulling="2025-11-24 17:05:40.289017104 +0000 UTC m=+821.535985762" observedRunningTime="2025-11-24 17:05:42.744112012 +0000 UTC m=+823.991080670" watchObservedRunningTime="2025-11-24 17:05:42.75123318 +0000 UTC m=+823.998201838" Nov 24 17:05:42 crc kubenswrapper[4768]: E1124 17:05:42.758669 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894\\\"\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-d9crw" podUID="e2835f06-b5ce-4170-a4c3-4a08e9cc2815" Nov 24 17:05:42 crc kubenswrapper[4768]: I1124 17:05:42.796139 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-v2hfk" podStartSLOduration=4.898934614 podStartE2EDuration="26.79611698s" podCreationTimestamp="2025-11-24 17:05:16 +0000 UTC" firstStartedPulling="2025-11-24 17:05:18.195268334 +0000 UTC m=+799.442236992" lastFinishedPulling="2025-11-24 17:05:40.09245071 +0000 UTC m=+821.339419358" observedRunningTime="2025-11-24 17:05:42.789024683 +0000 UTC m=+824.035993361" watchObservedRunningTime="2025-11-24 17:05:42.79611698 +0000 UTC m=+824.043085638" Nov 24 17:05:42 crc kubenswrapper[4768]: E1124 17:05:42.816324 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-5cb74df96-gtc95" podUID="d40b5804-6340-4be6-8da4-dca19827c8ee" Nov 24 17:05:42 crc kubenswrapper[4768]: E1124 17:05:42.993001 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-58fc45656d-mlqr9" podUID="cdfcbb97-9f2e-40ab-863a-93e592ee728a" Nov 24 17:05:43 crc kubenswrapper[4768]: E1124 17:05:43.141799 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-x24f2" podUID="27ed9b45-b076-4104-a661-bc231021ae5b" Nov 24 17:05:43 crc kubenswrapper[4768]: E1124 17:05:43.185747 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jfk9g" podUID="f7e72195-5597-498f-906e-573b0c5c8295" Nov 24 17:05:43 crc kubenswrapper[4768]: E1124 17:05:43.196132 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-qnqvs" podUID="e2f173d4-03f8-44b0-b05f-3dfd845569e8" Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.739817 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-9x7r8" event={"ID":"2f3138aa-0515-46f5-b897-191356f55fa4","Type":"ContainerStarted","Data":"1a2357006b208399cfb65dac348d1db9b08ae54bec7a1b4bc6bf028e7ae0d77c"} Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.740573 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-9x7r8" Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.746314 4768 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-gnzjb" event={"ID":"1a60eac6-e17c-4621-9367-3d1b60aab811","Type":"ContainerStarted","Data":"f7ebe52f7f7953d169b824f03353b5c17d6f342e4786450cc17c2147a50d2693"} Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.746526 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-gnzjb" Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.748386 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-dsjtl" event={"ID":"a718e502-d0e6-45ee-8a65-88de1381da04","Type":"ContainerStarted","Data":"bc54bf1998b2eb52500ebc277ece7df62fdb66e7fb0768658070b377e6ba2dc2"} Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.748517 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-dsjtl" Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.750774 4768 generic.go:334] "Generic (PLEG): container finished" podID="b4a92cd5-0847-4380-8530-9c9892f7b443" containerID="9a2361e08448dff2c37c91b18c2ed7a1f3963b15b4cc5a8e9057e798be143e47" exitCode=0 Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.750834 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgl4t" event={"ID":"b4a92cd5-0847-4380-8530-9c9892f7b443","Type":"ContainerDied","Data":"9a2361e08448dff2c37c91b18c2ed7a1f3963b15b4cc5a8e9057e798be143e47"} Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.752759 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6nh25" event={"ID":"8badbdc1-a611-4ada-821a-daade496a649","Type":"ContainerStarted","Data":"b7fad54b5acd4dd2b9b20eb6a5f253756045d56f003b5a82d385ad9acd15b107"} Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.752955 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6nh25" Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.757094 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-6smrr" event={"ID":"d35343f5-188c-4787-9002-125c9e597e80","Type":"ContainerStarted","Data":"2b0b04ba3fb722eef68fab806dca0b55331aa15640bb7dc7ba818b0613b6e01d"} Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.757232 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-6smrr" Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.759706 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-9sgvb" event={"ID":"18e53c5a-b1dd-4f0e-9bf6-8d97954d9d5d","Type":"ContainerStarted","Data":"396334e6afa42dc039147ae09596860f7844aa0d99af77ddb99e13e0ed3fd512"} Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.759747 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-9sgvb" Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.761507 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-x24f2" 
event={"ID":"27ed9b45-b076-4104-a661-bc231021ae5b","Type":"ContainerStarted","Data":"b6baf83bb8ee9d58ca76ca768c9c2dc8f3cf6656775b1a1c9c3cf4878300c7d0"} Nov 24 17:05:43 crc kubenswrapper[4768]: E1124 17:05:43.762543 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-x24f2" podUID="27ed9b45-b076-4104-a661-bc231021ae5b" Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.764320 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hvvsp" event={"ID":"5b5647ed-7d14-4366-af99-d6d48ec2f033","Type":"ContainerStarted","Data":"0832556963e29906438b8f5b2ff6bfe4f7f0dd5aaf75a09fbfb70e4a9f42750b"} Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.764452 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hvvsp" Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.766010 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-qnqvs" event={"ID":"e2f173d4-03f8-44b0-b05f-3dfd845569e8","Type":"ContainerStarted","Data":"8080e47e9c3cc3d2c1aae6cb9391b2cf1a42fcbe8f54ebc51e94eac9ada168a5"} Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.767642 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-58fc45656d-mlqr9" event={"ID":"cdfcbb97-9f2e-40ab-863a-93e592ee728a","Type":"ContainerStarted","Data":"552671676cb5f7a6939a60c601092c23a3a9b7b633163c1f8330be0d37b90fd5"} Nov 24 17:05:43 crc kubenswrapper[4768]: E1124 17:05:43.767642 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-qnqvs" podUID="e2f173d4-03f8-44b0-b05f-3dfd845569e8" Nov 24 17:05:43 crc kubenswrapper[4768]: E1124 17:05:43.768731 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.18:5001/openstack-k8s-operators/ironic-operator:bd29897380326806c801a45eb708db4292a017c0\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-58fc45656d-mlqr9" podUID="cdfcbb97-9f2e-40ab-863a-93e592ee728a" Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.769828 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xv4wf" event={"ID":"a8b9e845-7f76-4609-aef9-89d1a16c971b","Type":"ContainerStarted","Data":"e65536186001474f98b10fa539687f93241443018583643d9ab871a96ef80c09"} Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.769957 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xv4wf" Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.771667 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4xg49" event={"ID":"61f1ba78-cd9d-4202-9463-f7a4c5cc9092","Type":"ContainerStarted","Data":"116f3b2a099f4bfa15c7f3389a710bf08f3dc3a63e51d1a40645c3ebfd1bf1d3"} Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.771741 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4xg49" Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.773902 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zsr4q" event={"ID":"8eff7b8e-21b1-4d9f-ac7b-bc44593394c1","Type":"ContainerStarted","Data":"b0b626fddccf2ed26c57adbff50a5a82fa43b3cdcaf78abc4122b395aa6b7311"} Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.774025 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zsr4q" Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.776157 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jfk9g" event={"ID":"f7e72195-5597-498f-906e-573b0c5c8295","Type":"ContainerStarted","Data":"0d0564f3ffede2b817723d87944d20aa1f2801ae300f4c4c59fb3a159aad6f6a"} Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.778302 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-b2r7j" event={"ID":"f5471d19-b623-4aa2-9a14-56d05fe236f8","Type":"ContainerStarted","Data":"0f767e47553cce9bc2b8861fdb8e8ccbfa37584c51ec255a1d8e633daa074ecf"} Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.778400 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-b2r7j" Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.781043 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g" event={"ID":"2020ac4a-5a4a-4c38-b667-5432dbf3d891","Type":"ContainerStarted","Data":"e6274b1e97c75f3cffedfea0dc46204046df2470d5bb75c8e864395dfca5e4c4"} Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.781132 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g" Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.784947 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cq696" event={"ID":"90fcab37-b99c-486d-a48b-059f3d28a3ee","Type":"ContainerStarted","Data":"90cf5d41e0bec27d55beb344fdcf50aa3142018c513d1ce5ba8fe724529a8fcb"} Nov 24 17:05:43 crc kubenswrapper[4768]: E1124 17:05:43.786846 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c6405d94e56b40ef669729216ab4b9c441f34bb280902efa2940038c076b560f\\\"\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jdszs" podUID="f5b8ba2f-084a-4285-938b-5ffe669a9250" Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.803961 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-9x7r8" podStartSLOduration=5.722094845 
podStartE2EDuration="27.803943034s" podCreationTimestamp="2025-11-24 17:05:16 +0000 UTC" firstStartedPulling="2025-11-24 17:05:18.207094043 +0000 UTC m=+799.454062701" lastFinishedPulling="2025-11-24 17:05:40.288942242 +0000 UTC m=+821.535910890" observedRunningTime="2025-11-24 17:05:43.782896317 +0000 UTC m=+825.029864975" watchObservedRunningTime="2025-11-24 17:05:43.803943034 +0000 UTC m=+825.050911692" Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.806093 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zsr4q" podStartSLOduration=2.621738843 podStartE2EDuration="27.806087593s" podCreationTimestamp="2025-11-24 17:05:16 +0000 UTC" firstStartedPulling="2025-11-24 17:05:18.173746724 +0000 UTC m=+799.420715382" lastFinishedPulling="2025-11-24 17:05:43.358095474 +0000 UTC m=+824.605064132" observedRunningTime="2025-11-24 17:05:43.800601281 +0000 UTC m=+825.047569939" watchObservedRunningTime="2025-11-24 17:05:43.806087593 +0000 UTC m=+825.053056251" Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.910198 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-864885998-b2r7j" podStartSLOduration=6.001390504 podStartE2EDuration="27.910179812s" podCreationTimestamp="2025-11-24 17:05:16 +0000 UTC" firstStartedPulling="2025-11-24 17:05:18.380157604 +0000 UTC m=+799.627126252" lastFinishedPulling="2025-11-24 17:05:40.288946902 +0000 UTC m=+821.535915560" observedRunningTime="2025-11-24 17:05:43.885745741 +0000 UTC m=+825.132714399" watchObservedRunningTime="2025-11-24 17:05:43.910179812 +0000 UTC m=+825.157148470" Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.939631 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4xg49" podStartSLOduration=3.08153139 podStartE2EDuration="27.939613912s" podCreationTimestamp="2025-11-24 17:05:16 +0000 UTC" firstStartedPulling="2025-11-24 17:05:17.097909406 +0000 UTC m=+798.344878064" lastFinishedPulling="2025-11-24 17:05:41.955991928 +0000 UTC m=+823.202960586" observedRunningTime="2025-11-24 17:05:43.924043128 +0000 UTC m=+825.171011806" watchObservedRunningTime="2025-11-24 17:05:43.939613912 +0000 UTC m=+825.186582630" Nov 24 17:05:43 crc kubenswrapper[4768]: I1124 17:05:43.989510 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g" podStartSLOduration=7.321165764 podStartE2EDuration="27.989488401s" podCreationTimestamp="2025-11-24 17:05:16 +0000 UTC" firstStartedPulling="2025-11-24 17:05:18.991540333 +0000 UTC m=+800.238508991" lastFinishedPulling="2025-11-24 17:05:39.65986297 +0000 UTC m=+820.906831628" observedRunningTime="2025-11-24 17:05:43.988285418 +0000 UTC m=+825.235254076" watchObservedRunningTime="2025-11-24 17:05:43.989488401 +0000 UTC m=+825.236457059" Nov 24 17:05:44 crc kubenswrapper[4768]: I1124 17:05:44.036741 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-9sgvb" podStartSLOduration=2.724276259 podStartE2EDuration="28.036724697s" podCreationTimestamp="2025-11-24 17:05:16 +0000 UTC" firstStartedPulling="2025-11-24 17:05:17.904975128 +0000 UTC m=+799.151943786" lastFinishedPulling="2025-11-24 17:05:43.217423566 +0000 UTC m=+824.464392224" 
observedRunningTime="2025-11-24 17:05:44.036405468 +0000 UTC m=+825.283374126" watchObservedRunningTime="2025-11-24 17:05:44.036724697 +0000 UTC m=+825.283693355" Nov 24 17:05:44 crc kubenswrapper[4768]: I1124 17:05:44.040254 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6nh25" podStartSLOduration=3.9059309840000003 podStartE2EDuration="28.040245725s" podCreationTimestamp="2025-11-24 17:05:16 +0000 UTC" firstStartedPulling="2025-11-24 17:05:17.716838937 +0000 UTC m=+798.963807595" lastFinishedPulling="2025-11-24 17:05:41.851153678 +0000 UTC m=+823.098122336" observedRunningTime="2025-11-24 17:05:44.018947552 +0000 UTC m=+825.265916210" watchObservedRunningTime="2025-11-24 17:05:44.040245725 +0000 UTC m=+825.287214383" Nov 24 17:05:44 crc kubenswrapper[4768]: I1124 17:05:44.059199 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-dsjtl" podStartSLOduration=4.157542912 podStartE2EDuration="28.059176192s" podCreationTimestamp="2025-11-24 17:05:16 +0000 UTC" firstStartedPulling="2025-11-24 17:05:18.108944419 +0000 UTC m=+799.355913087" lastFinishedPulling="2025-11-24 17:05:42.010577709 +0000 UTC m=+823.257546367" observedRunningTime="2025-11-24 17:05:44.053731401 +0000 UTC m=+825.300700049" watchObservedRunningTime="2025-11-24 17:05:44.059176192 +0000 UTC m=+825.306144850" Nov 24 17:05:44 crc kubenswrapper[4768]: I1124 17:05:44.078903 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hvvsp" podStartSLOduration=6.385508732 podStartE2EDuration="28.078887101s" podCreationTimestamp="2025-11-24 17:05:16 +0000 UTC" firstStartedPulling="2025-11-24 17:05:18.39868519 +0000 UTC m=+799.645653848" lastFinishedPulling="2025-11-24 17:05:40.092063549 +0000 UTC m=+821.339032217" observedRunningTime="2025-11-24 17:05:44.077312827 +0000 UTC m=+825.324281495" watchObservedRunningTime="2025-11-24 17:05:44.078887101 +0000 UTC m=+825.325855759" Nov 24 17:05:44 crc kubenswrapper[4768]: I1124 17:05:44.092530 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-gnzjb" podStartSLOduration=2.217288147 podStartE2EDuration="28.092513271s" podCreationTimestamp="2025-11-24 17:05:16 +0000 UTC" firstStartedPulling="2025-11-24 17:05:17.341566954 +0000 UTC m=+798.588535612" lastFinishedPulling="2025-11-24 17:05:43.216792078 +0000 UTC m=+824.463760736" observedRunningTime="2025-11-24 17:05:44.090780433 +0000 UTC m=+825.337749101" watchObservedRunningTime="2025-11-24 17:05:44.092513271 +0000 UTC m=+825.339481929" Nov 24 17:05:44 crc kubenswrapper[4768]: I1124 17:05:44.109220 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xv4wf" podStartSLOduration=4.209098479 podStartE2EDuration="28.109200886s" podCreationTimestamp="2025-11-24 17:05:16 +0000 UTC" firstStartedPulling="2025-11-24 17:05:18.111157581 +0000 UTC m=+799.358126239" lastFinishedPulling="2025-11-24 17:05:42.011259988 +0000 UTC m=+823.258228646" observedRunningTime="2025-11-24 17:05:44.107137778 +0000 UTC m=+825.354106436" watchObservedRunningTime="2025-11-24 17:05:44.109200886 +0000 UTC m=+825.356169544" Nov 24 17:05:44 crc kubenswrapper[4768]: I1124 17:05:44.124075 4768 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-774b86978c-6smrr" podStartSLOduration=3.860251211 podStartE2EDuration="28.124042279s" podCreationTimestamp="2025-11-24 17:05:16 +0000 UTC" firstStartedPulling="2025-11-24 17:05:17.692099438 +0000 UTC m=+798.939068086" lastFinishedPulling="2025-11-24 17:05:41.955890496 +0000 UTC m=+823.202859154" observedRunningTime="2025-11-24 17:05:44.123893695 +0000 UTC m=+825.370862353" watchObservedRunningTime="2025-11-24 17:05:44.124042279 +0000 UTC m=+825.371010937" Nov 24 17:05:44 crc kubenswrapper[4768]: I1124 17:05:44.793572 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-gtc95" event={"ID":"d40b5804-6340-4be6-8da4-dca19827c8ee","Type":"ContainerStarted","Data":"33923ce26dfacb66c81870689b22f15a6daa86194714ee4a8c8a5d3372a6e1de"} Nov 24 17:05:44 crc kubenswrapper[4768]: I1124 17:05:44.795012 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-gtc95" Nov 24 17:05:44 crc kubenswrapper[4768]: I1124 17:05:44.795993 4768 generic.go:334] "Generic (PLEG): container finished" podID="90fcab37-b99c-486d-a48b-059f3d28a3ee" containerID="90cf5d41e0bec27d55beb344fdcf50aa3142018c513d1ce5ba8fe724529a8fcb" exitCode=0 Nov 24 17:05:44 crc kubenswrapper[4768]: I1124 17:05:44.796069 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cq696" event={"ID":"90fcab37-b99c-486d-a48b-059f3d28a3ee","Type":"ContainerDied","Data":"90cf5d41e0bec27d55beb344fdcf50aa3142018c513d1ce5ba8fe724529a8fcb"} Nov 24 17:05:44 crc kubenswrapper[4768]: I1124 17:05:44.798205 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgl4t" event={"ID":"b4a92cd5-0847-4380-8530-9c9892f7b443","Type":"ContainerStarted","Data":"6a9b09784fadd56e30f88c460385a28a083913d0583e29b96ca8ed4c131cba31"} Nov 24 17:05:44 crc kubenswrapper[4768]: I1124 17:05:44.802110 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jfk9g" event={"ID":"f7e72195-5597-498f-906e-573b0c5c8295","Type":"ContainerStarted","Data":"60be6e6378fda212ea972632b8890a016f345b8ddb88c850cfed76d3be934f6f"} Nov 24 17:05:44 crc kubenswrapper[4768]: I1124 17:05:44.802265 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jfk9g" Nov 24 17:05:44 crc kubenswrapper[4768]: I1124 17:05:44.817224 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-d9crw" event={"ID":"e2835f06-b5ce-4170-a4c3-4a08e9cc2815","Type":"ContainerStarted","Data":"f3539836e09f5258493da82e8f2f2031f285b517f6e4a3e9d9d3097e3ba4d671"} Nov 24 17:05:44 crc kubenswrapper[4768]: E1124 17:05:44.822876 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.18:5001/openstack-k8s-operators/ironic-operator:bd29897380326806c801a45eb708db4292a017c0\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-58fc45656d-mlqr9" podUID="cdfcbb97-9f2e-40ab-863a-93e592ee728a" Nov 24 17:05:44 crc kubenswrapper[4768]: I1124 17:05:44.827273 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kgl4t" Nov 
24 17:05:44 crc kubenswrapper[4768]: I1124 17:05:44.827318 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kgl4t" Nov 24 17:05:44 crc kubenswrapper[4768]: I1124 17:05:44.846500 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cb74df96-gtc95" podStartSLOduration=2.830341994 podStartE2EDuration="28.846473093s" podCreationTimestamp="2025-11-24 17:05:16 +0000 UTC" firstStartedPulling="2025-11-24 17:05:18.367948354 +0000 UTC m=+799.614917022" lastFinishedPulling="2025-11-24 17:05:44.384079463 +0000 UTC m=+825.631048121" observedRunningTime="2025-11-24 17:05:44.844190959 +0000 UTC m=+826.091159617" watchObservedRunningTime="2025-11-24 17:05:44.846473093 +0000 UTC m=+826.093441761" Nov 24 17:05:44 crc kubenswrapper[4768]: I1124 17:05:44.863091 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jfk9g" podStartSLOduration=2.633677235 podStartE2EDuration="28.863076605s" podCreationTimestamp="2025-11-24 17:05:16 +0000 UTC" firstStartedPulling="2025-11-24 17:05:18.156686749 +0000 UTC m=+799.403655397" lastFinishedPulling="2025-11-24 17:05:44.386086109 +0000 UTC m=+825.633054767" observedRunningTime="2025-11-24 17:05:44.860902425 +0000 UTC m=+826.107871083" watchObservedRunningTime="2025-11-24 17:05:44.863076605 +0000 UTC m=+826.110045263" Nov 24 17:05:44 crc kubenswrapper[4768]: I1124 17:05:44.922867 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kgl4t" podStartSLOduration=18.345144519 podStartE2EDuration="20.92284702s" podCreationTimestamp="2025-11-24 17:05:24 +0000 UTC" firstStartedPulling="2025-11-24 17:05:41.672318017 +0000 UTC m=+822.919286675" lastFinishedPulling="2025-11-24 17:05:44.250020518 +0000 UTC m=+825.496989176" observedRunningTime="2025-11-24 17:05:44.922504631 +0000 UTC m=+826.169473289" watchObservedRunningTime="2025-11-24 17:05:44.92284702 +0000 UTC m=+826.169815668" Nov 24 17:05:44 crc kubenswrapper[4768]: I1124 17:05:44.986254 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-d9crw" podStartSLOduration=2.453813186 podStartE2EDuration="28.986232406s" podCreationTimestamp="2025-11-24 17:05:16 +0000 UTC" firstStartedPulling="2025-11-24 17:05:17.717276329 +0000 UTC m=+798.964244987" lastFinishedPulling="2025-11-24 17:05:44.249695549 +0000 UTC m=+825.496664207" observedRunningTime="2025-11-24 17:05:44.982113921 +0000 UTC m=+826.229082579" watchObservedRunningTime="2025-11-24 17:05:44.986232406 +0000 UTC m=+826.233201074" Nov 24 17:05:45 crc kubenswrapper[4768]: I1124 17:05:45.886845 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kgl4t" podUID="b4a92cd5-0847-4380-8530-9c9892f7b443" containerName="registry-server" probeResult="failure" output=< Nov 24 17:05:45 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Nov 24 17:05:45 crc kubenswrapper[4768]: > Nov 24 17:05:46 crc kubenswrapper[4768]: I1124 17:05:46.404621 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-4xg49" Nov 24 17:05:46 crc kubenswrapper[4768]: I1124 17:05:46.519060 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/glance-operator-controller-manager-68b95954c9-fxzrc" Nov 24 17:05:46 crc kubenswrapper[4768]: I1124 17:05:46.598898 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-6smrr" Nov 24 17:05:46 crc kubenswrapper[4768]: I1124 17:05:46.652078 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-d9crw" Nov 24 17:05:46 crc kubenswrapper[4768]: I1124 17:05:46.800928 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-dsjtl" Nov 24 17:05:46 crc kubenswrapper[4768]: I1124 17:05:46.834184 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xv4wf" Nov 24 17:05:46 crc kubenswrapper[4768]: I1124 17:05:46.897432 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6nh25" Nov 24 17:05:47 crc kubenswrapper[4768]: I1124 17:05:47.072231 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-v2hfk" Nov 24 17:05:47 crc kubenswrapper[4768]: I1124 17:05:47.155285 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-9x7r8" Nov 24 17:05:47 crc kubenswrapper[4768]: I1124 17:05:47.314973 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-hvvsp" Nov 24 17:05:47 crc kubenswrapper[4768]: I1124 17:05:47.446898 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-b2r7j" Nov 24 17:05:48 crc kubenswrapper[4768]: I1124 17:05:48.505820 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g" Nov 24 17:05:51 crc kubenswrapper[4768]: I1124 17:05:51.117306 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-56fcd5b457-nhnr6" Nov 24 17:05:54 crc kubenswrapper[4768]: I1124 17:05:54.397930 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rvdbc"] Nov 24 17:05:54 crc kubenswrapper[4768]: E1124 17:05:54.398666 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6" containerName="registry-server" Nov 24 17:05:54 crc kubenswrapper[4768]: I1124 17:05:54.398795 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6" containerName="registry-server" Nov 24 17:05:54 crc kubenswrapper[4768]: E1124 17:05:54.398891 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6" containerName="extract-utilities" Nov 24 17:05:54 crc kubenswrapper[4768]: I1124 17:05:54.398902 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6" containerName="extract-utilities" Nov 24 17:05:54 crc kubenswrapper[4768]: E1124 17:05:54.398922 4768 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6" containerName="extract-content" Nov 24 17:05:54 crc kubenswrapper[4768]: I1124 17:05:54.398931 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6" containerName="extract-content" Nov 24 17:05:54 crc kubenswrapper[4768]: I1124 17:05:54.399132 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d811f60-5a0c-4baa-bfa7-4a3e2a5cc2c6" containerName="registry-server" Nov 24 17:05:54 crc kubenswrapper[4768]: I1124 17:05:54.401059 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rvdbc" Nov 24 17:05:54 crc kubenswrapper[4768]: I1124 17:05:54.403747 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rvdbc"] Nov 24 17:05:54 crc kubenswrapper[4768]: I1124 17:05:54.563949 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623031aa-d897-417f-9aa1-15fe1810baa9-catalog-content\") pod \"community-operators-rvdbc\" (UID: \"623031aa-d897-417f-9aa1-15fe1810baa9\") " pod="openshift-marketplace/community-operators-rvdbc" Nov 24 17:05:54 crc kubenswrapper[4768]: I1124 17:05:54.564724 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2stx\" (UniqueName: \"kubernetes.io/projected/623031aa-d897-417f-9aa1-15fe1810baa9-kube-api-access-d2stx\") pod \"community-operators-rvdbc\" (UID: \"623031aa-d897-417f-9aa1-15fe1810baa9\") " pod="openshift-marketplace/community-operators-rvdbc" Nov 24 17:05:54 crc kubenswrapper[4768]: I1124 17:05:54.564998 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623031aa-d897-417f-9aa1-15fe1810baa9-utilities\") pod \"community-operators-rvdbc\" (UID: \"623031aa-d897-417f-9aa1-15fe1810baa9\") " pod="openshift-marketplace/community-operators-rvdbc" Nov 24 17:05:54 crc kubenswrapper[4768]: I1124 17:05:54.666798 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2stx\" (UniqueName: \"kubernetes.io/projected/623031aa-d897-417f-9aa1-15fe1810baa9-kube-api-access-d2stx\") pod \"community-operators-rvdbc\" (UID: \"623031aa-d897-417f-9aa1-15fe1810baa9\") " pod="openshift-marketplace/community-operators-rvdbc" Nov 24 17:05:54 crc kubenswrapper[4768]: I1124 17:05:54.666900 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623031aa-d897-417f-9aa1-15fe1810baa9-utilities\") pod \"community-operators-rvdbc\" (UID: \"623031aa-d897-417f-9aa1-15fe1810baa9\") " pod="openshift-marketplace/community-operators-rvdbc" Nov 24 17:05:54 crc kubenswrapper[4768]: I1124 17:05:54.666980 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623031aa-d897-417f-9aa1-15fe1810baa9-catalog-content\") pod \"community-operators-rvdbc\" (UID: \"623031aa-d897-417f-9aa1-15fe1810baa9\") " pod="openshift-marketplace/community-operators-rvdbc" Nov 24 17:05:54 crc kubenswrapper[4768]: I1124 17:05:54.667709 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623031aa-d897-417f-9aa1-15fe1810baa9-catalog-content\") pod 
\"community-operators-rvdbc\" (UID: \"623031aa-d897-417f-9aa1-15fe1810baa9\") " pod="openshift-marketplace/community-operators-rvdbc" Nov 24 17:05:54 crc kubenswrapper[4768]: I1124 17:05:54.667834 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623031aa-d897-417f-9aa1-15fe1810baa9-utilities\") pod \"community-operators-rvdbc\" (UID: \"623031aa-d897-417f-9aa1-15fe1810baa9\") " pod="openshift-marketplace/community-operators-rvdbc" Nov 24 17:05:54 crc kubenswrapper[4768]: I1124 17:05:54.692500 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2stx\" (UniqueName: \"kubernetes.io/projected/623031aa-d897-417f-9aa1-15fe1810baa9-kube-api-access-d2stx\") pod \"community-operators-rvdbc\" (UID: \"623031aa-d897-417f-9aa1-15fe1810baa9\") " pod="openshift-marketplace/community-operators-rvdbc" Nov 24 17:05:54 crc kubenswrapper[4768]: I1124 17:05:54.769073 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rvdbc" Nov 24 17:05:54 crc kubenswrapper[4768]: I1124 17:05:54.908609 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kgl4t" Nov 24 17:05:54 crc kubenswrapper[4768]: I1124 17:05:54.975635 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kgl4t" Nov 24 17:05:55 crc kubenswrapper[4768]: I1124 17:05:55.232577 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rvdbc"] Nov 24 17:05:55 crc kubenswrapper[4768]: W1124 17:05:55.234794 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod623031aa_d897_417f_9aa1_15fe1810baa9.slice/crio-6729bf955840344e344326231f59caeab311d1f9b827d77013be1f69cbc8fe54 WatchSource:0}: Error finding container 6729bf955840344e344326231f59caeab311d1f9b827d77013be1f69cbc8fe54: Status 404 returned error can't find the container with id 6729bf955840344e344326231f59caeab311d1f9b827d77013be1f69cbc8fe54 Nov 24 17:05:55 crc kubenswrapper[4768]: I1124 17:05:55.909579 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvdbc" event={"ID":"623031aa-d897-417f-9aa1-15fe1810baa9","Type":"ContainerStarted","Data":"6729bf955840344e344326231f59caeab311d1f9b827d77013be1f69cbc8fe54"} Nov 24 17:05:56 crc kubenswrapper[4768]: I1124 17:05:56.427528 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-gnzjb" Nov 24 17:05:56 crc kubenswrapper[4768]: I1124 17:05:56.596738 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jfk9g" Nov 24 17:05:56 crc kubenswrapper[4768]: I1124 17:05:56.657339 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-d9crw" Nov 24 17:05:56 crc kubenswrapper[4768]: I1124 17:05:56.682853 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-zsr4q" Nov 24 17:05:56 crc kubenswrapper[4768]: I1124 17:05:56.767525 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-kgl4t"] Nov 24 17:05:56 crc kubenswrapper[4768]: I1124 17:05:56.884842 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-9sgvb" Nov 24 17:05:56 crc kubenswrapper[4768]: I1124 17:05:56.918555 4768 generic.go:334] "Generic (PLEG): container finished" podID="623031aa-d897-417f-9aa1-15fe1810baa9" containerID="95027dc55130f4649970ac8d28427871b953130035110e064dc50ba831d3b621" exitCode=0 Nov 24 17:05:56 crc kubenswrapper[4768]: I1124 17:05:56.918629 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvdbc" event={"ID":"623031aa-d897-417f-9aa1-15fe1810baa9","Type":"ContainerDied","Data":"95027dc55130f4649970ac8d28427871b953130035110e064dc50ba831d3b621"} Nov 24 17:05:56 crc kubenswrapper[4768]: I1124 17:05:56.918892 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kgl4t" podUID="b4a92cd5-0847-4380-8530-9c9892f7b443" containerName="registry-server" containerID="cri-o://6a9b09784fadd56e30f88c460385a28a083913d0583e29b96ca8ed4c131cba31" gracePeriod=2 Nov 24 17:05:57 crc kubenswrapper[4768]: I1124 17:05:57.403122 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-gtc95" Nov 24 17:05:57 crc kubenswrapper[4768]: I1124 17:05:57.911918 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kgl4t" Nov 24 17:05:57 crc kubenswrapper[4768]: I1124 17:05:57.935818 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4a92cd5-0847-4380-8530-9c9892f7b443-utilities\") pod \"b4a92cd5-0847-4380-8530-9c9892f7b443\" (UID: \"b4a92cd5-0847-4380-8530-9c9892f7b443\") " Nov 24 17:05:57 crc kubenswrapper[4768]: I1124 17:05:57.935946 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4a92cd5-0847-4380-8530-9c9892f7b443-catalog-content\") pod \"b4a92cd5-0847-4380-8530-9c9892f7b443\" (UID: \"b4a92cd5-0847-4380-8530-9c9892f7b443\") " Nov 24 17:05:57 crc kubenswrapper[4768]: I1124 17:05:57.935982 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rwhq\" (UniqueName: \"kubernetes.io/projected/b4a92cd5-0847-4380-8530-9c9892f7b443-kube-api-access-9rwhq\") pod \"b4a92cd5-0847-4380-8530-9c9892f7b443\" (UID: \"b4a92cd5-0847-4380-8530-9c9892f7b443\") " Nov 24 17:05:57 crc kubenswrapper[4768]: I1124 17:05:57.936933 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4a92cd5-0847-4380-8530-9c9892f7b443-utilities" (OuterVolumeSpecName: "utilities") pod "b4a92cd5-0847-4380-8530-9c9892f7b443" (UID: "b4a92cd5-0847-4380-8530-9c9892f7b443"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:05:57 crc kubenswrapper[4768]: I1124 17:05:57.938198 4768 generic.go:334] "Generic (PLEG): container finished" podID="b4a92cd5-0847-4380-8530-9c9892f7b443" containerID="6a9b09784fadd56e30f88c460385a28a083913d0583e29b96ca8ed4c131cba31" exitCode=0 Nov 24 17:05:57 crc kubenswrapper[4768]: I1124 17:05:57.938291 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgl4t" event={"ID":"b4a92cd5-0847-4380-8530-9c9892f7b443","Type":"ContainerDied","Data":"6a9b09784fadd56e30f88c460385a28a083913d0583e29b96ca8ed4c131cba31"} Nov 24 17:05:57 crc kubenswrapper[4768]: I1124 17:05:57.938331 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgl4t" event={"ID":"b4a92cd5-0847-4380-8530-9c9892f7b443","Type":"ContainerDied","Data":"f348be5d5c093f440ddc238018a66ee2881679b93fbe06bc25fa50d2f198fddb"} Nov 24 17:05:57 crc kubenswrapper[4768]: I1124 17:05:57.938798 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kgl4t" Nov 24 17:05:57 crc kubenswrapper[4768]: I1124 17:05:57.939093 4768 scope.go:117] "RemoveContainer" containerID="6a9b09784fadd56e30f88c460385a28a083913d0583e29b96ca8ed4c131cba31" Nov 24 17:05:57 crc kubenswrapper[4768]: I1124 17:05:57.948603 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4a92cd5-0847-4380-8530-9c9892f7b443-kube-api-access-9rwhq" (OuterVolumeSpecName: "kube-api-access-9rwhq") pod "b4a92cd5-0847-4380-8530-9c9892f7b443" (UID: "b4a92cd5-0847-4380-8530-9c9892f7b443"). InnerVolumeSpecName "kube-api-access-9rwhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:05:57 crc kubenswrapper[4768]: I1124 17:05:57.948878 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-qnqvs" event={"ID":"e2f173d4-03f8-44b0-b05f-3dfd845569e8","Type":"ContainerStarted","Data":"3391037cea79587ca228af3da7b68340cb582210333fe5fae87a96088e233234"} Nov 24 17:05:57 crc kubenswrapper[4768]: I1124 17:05:57.949146 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-qnqvs" Nov 24 17:05:57 crc kubenswrapper[4768]: I1124 17:05:57.953149 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-58fc45656d-mlqr9" event={"ID":"cdfcbb97-9f2e-40ab-863a-93e592ee728a","Type":"ContainerStarted","Data":"ce50de204f66e835694c15cbc2febe226cc559976a3a81a6c24ea91f944673e1"} Nov 24 17:05:57 crc kubenswrapper[4768]: I1124 17:05:57.954321 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-58fc45656d-mlqr9" Nov 24 17:05:57 crc kubenswrapper[4768]: I1124 17:05:57.960862 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvdbc" event={"ID":"623031aa-d897-417f-9aa1-15fe1810baa9","Type":"ContainerStarted","Data":"db7ed2327686574540e8c47d1cea1f5b0ae49d799c296a77ed45def3e318b62f"} Nov 24 17:05:57 crc kubenswrapper[4768]: I1124 17:05:57.967645 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-x24f2" 
event={"ID":"27ed9b45-b076-4104-a661-bc231021ae5b","Type":"ContainerStarted","Data":"45633b24d15958c4a5c3e60a3d3272aeebb89619e5addd5e099912a4d1310018"} Nov 24 17:05:57 crc kubenswrapper[4768]: I1124 17:05:57.967965 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-x24f2" Nov 24 17:05:57 crc kubenswrapper[4768]: I1124 17:05:57.969305 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-qnqvs" podStartSLOduration=5.683086847 podStartE2EDuration="41.969219847s" podCreationTimestamp="2025-11-24 17:05:16 +0000 UTC" firstStartedPulling="2025-11-24 17:05:18.178834396 +0000 UTC m=+799.425803054" lastFinishedPulling="2025-11-24 17:05:54.464967396 +0000 UTC m=+835.711936054" observedRunningTime="2025-11-24 17:05:57.967164749 +0000 UTC m=+839.214133407" watchObservedRunningTime="2025-11-24 17:05:57.969219847 +0000 UTC m=+839.216188515" Nov 24 17:05:57 crc kubenswrapper[4768]: I1124 17:05:57.969508 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cq696" event={"ID":"90fcab37-b99c-486d-a48b-059f3d28a3ee","Type":"ContainerStarted","Data":"53ffcbdf46e0d22e81b4dc5fc2f35a2a1e5cdf1e3dad1bb8fba451c566290f93"} Nov 24 17:05:57 crc kubenswrapper[4768]: I1124 17:05:57.987449 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4a92cd5-0847-4380-8530-9c9892f7b443-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4a92cd5-0847-4380-8530-9c9892f7b443" (UID: "b4a92cd5-0847-4380-8530-9c9892f7b443"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:05:58 crc kubenswrapper[4768]: I1124 17:05:58.004044 4768 scope.go:117] "RemoveContainer" containerID="9a2361e08448dff2c37c91b18c2ed7a1f3963b15b4cc5a8e9057e798be143e47" Nov 24 17:05:58 crc kubenswrapper[4768]: I1124 17:05:58.005993 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-58fc45656d-mlqr9" podStartSLOduration=2.057088004 podStartE2EDuration="42.0059705s" podCreationTimestamp="2025-11-24 17:05:16 +0000 UTC" firstStartedPulling="2025-11-24 17:05:17.692122758 +0000 UTC m=+798.939091426" lastFinishedPulling="2025-11-24 17:05:57.641005254 +0000 UTC m=+838.887973922" observedRunningTime="2025-11-24 17:05:58.003895663 +0000 UTC m=+839.250864321" watchObservedRunningTime="2025-11-24 17:05:58.0059705 +0000 UTC m=+839.252939158" Nov 24 17:05:58 crc kubenswrapper[4768]: I1124 17:05:58.027374 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-x24f2" podStartSLOduration=5.758066545 podStartE2EDuration="42.027334425s" podCreationTimestamp="2025-11-24 17:05:16 +0000 UTC" firstStartedPulling="2025-11-24 17:05:18.195266254 +0000 UTC m=+799.442234912" lastFinishedPulling="2025-11-24 17:05:54.464534134 +0000 UTC m=+835.711502792" observedRunningTime="2025-11-24 17:05:58.024763484 +0000 UTC m=+839.271732142" watchObservedRunningTime="2025-11-24 17:05:58.027334425 +0000 UTC m=+839.274303083" Nov 24 17:05:58 crc kubenswrapper[4768]: I1124 17:05:58.034595 4768 scope.go:117] "RemoveContainer" containerID="21b2830d0971f89b3515d18c673785bd84da2a091563f4341934dd4ff5f7e9af" Nov 24 17:05:58 crc kubenswrapper[4768]: I1124 17:05:58.037410 4768 reconciler_common.go:293] "Volume 
detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4a92cd5-0847-4380-8530-9c9892f7b443-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:05:58 crc kubenswrapper[4768]: I1124 17:05:58.037434 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4a92cd5-0847-4380-8530-9c9892f7b443-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:05:58 crc kubenswrapper[4768]: I1124 17:05:58.037444 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rwhq\" (UniqueName: \"kubernetes.io/projected/b4a92cd5-0847-4380-8530-9c9892f7b443-kube-api-access-9rwhq\") on node \"crc\" DevicePath \"\"" Nov 24 17:05:58 crc kubenswrapper[4768]: I1124 17:05:58.045494 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cq696" podStartSLOduration=26.261684698 podStartE2EDuration="39.045472231s" podCreationTimestamp="2025-11-24 17:05:19 +0000 UTC" firstStartedPulling="2025-11-24 17:05:41.681821291 +0000 UTC m=+822.928789949" lastFinishedPulling="2025-11-24 17:05:54.465608824 +0000 UTC m=+835.712577482" observedRunningTime="2025-11-24 17:05:58.040831471 +0000 UTC m=+839.287800149" watchObservedRunningTime="2025-11-24 17:05:58.045472231 +0000 UTC m=+839.292440889" Nov 24 17:05:58 crc kubenswrapper[4768]: I1124 17:05:58.053327 4768 scope.go:117] "RemoveContainer" containerID="6a9b09784fadd56e30f88c460385a28a083913d0583e29b96ca8ed4c131cba31" Nov 24 17:05:58 crc kubenswrapper[4768]: E1124 17:05:58.053920 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a9b09784fadd56e30f88c460385a28a083913d0583e29b96ca8ed4c131cba31\": container with ID starting with 6a9b09784fadd56e30f88c460385a28a083913d0583e29b96ca8ed4c131cba31 not found: ID does not exist" containerID="6a9b09784fadd56e30f88c460385a28a083913d0583e29b96ca8ed4c131cba31" Nov 24 17:05:58 crc kubenswrapper[4768]: I1124 17:05:58.053974 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a9b09784fadd56e30f88c460385a28a083913d0583e29b96ca8ed4c131cba31"} err="failed to get container status \"6a9b09784fadd56e30f88c460385a28a083913d0583e29b96ca8ed4c131cba31\": rpc error: code = NotFound desc = could not find container \"6a9b09784fadd56e30f88c460385a28a083913d0583e29b96ca8ed4c131cba31\": container with ID starting with 6a9b09784fadd56e30f88c460385a28a083913d0583e29b96ca8ed4c131cba31 not found: ID does not exist" Nov 24 17:05:58 crc kubenswrapper[4768]: I1124 17:05:58.054012 4768 scope.go:117] "RemoveContainer" containerID="9a2361e08448dff2c37c91b18c2ed7a1f3963b15b4cc5a8e9057e798be143e47" Nov 24 17:05:58 crc kubenswrapper[4768]: E1124 17:05:58.054414 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a2361e08448dff2c37c91b18c2ed7a1f3963b15b4cc5a8e9057e798be143e47\": container with ID starting with 9a2361e08448dff2c37c91b18c2ed7a1f3963b15b4cc5a8e9057e798be143e47 not found: ID does not exist" containerID="9a2361e08448dff2c37c91b18c2ed7a1f3963b15b4cc5a8e9057e798be143e47" Nov 24 17:05:58 crc kubenswrapper[4768]: I1124 17:05:58.054445 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a2361e08448dff2c37c91b18c2ed7a1f3963b15b4cc5a8e9057e798be143e47"} err="failed to get container status 
\"9a2361e08448dff2c37c91b18c2ed7a1f3963b15b4cc5a8e9057e798be143e47\": rpc error: code = NotFound desc = could not find container \"9a2361e08448dff2c37c91b18c2ed7a1f3963b15b4cc5a8e9057e798be143e47\": container with ID starting with 9a2361e08448dff2c37c91b18c2ed7a1f3963b15b4cc5a8e9057e798be143e47 not found: ID does not exist" Nov 24 17:05:58 crc kubenswrapper[4768]: I1124 17:05:58.054463 4768 scope.go:117] "RemoveContainer" containerID="21b2830d0971f89b3515d18c673785bd84da2a091563f4341934dd4ff5f7e9af" Nov 24 17:05:58 crc kubenswrapper[4768]: E1124 17:05:58.054843 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21b2830d0971f89b3515d18c673785bd84da2a091563f4341934dd4ff5f7e9af\": container with ID starting with 21b2830d0971f89b3515d18c673785bd84da2a091563f4341934dd4ff5f7e9af not found: ID does not exist" containerID="21b2830d0971f89b3515d18c673785bd84da2a091563f4341934dd4ff5f7e9af" Nov 24 17:05:58 crc kubenswrapper[4768]: I1124 17:05:58.054878 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21b2830d0971f89b3515d18c673785bd84da2a091563f4341934dd4ff5f7e9af"} err="failed to get container status \"21b2830d0971f89b3515d18c673785bd84da2a091563f4341934dd4ff5f7e9af\": rpc error: code = NotFound desc = could not find container \"21b2830d0971f89b3515d18c673785bd84da2a091563f4341934dd4ff5f7e9af\": container with ID starting with 21b2830d0971f89b3515d18c673785bd84da2a091563f4341934dd4ff5f7e9af not found: ID does not exist" Nov 24 17:05:58 crc kubenswrapper[4768]: I1124 17:05:58.273168 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kgl4t"] Nov 24 17:05:58 crc kubenswrapper[4768]: I1124 17:05:58.278979 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kgl4t"] Nov 24 17:05:58 crc kubenswrapper[4768]: I1124 17:05:58.978739 4768 generic.go:334] "Generic (PLEG): container finished" podID="623031aa-d897-417f-9aa1-15fe1810baa9" containerID="db7ed2327686574540e8c47d1cea1f5b0ae49d799c296a77ed45def3e318b62f" exitCode=0 Nov 24 17:05:58 crc kubenswrapper[4768]: I1124 17:05:58.978835 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvdbc" event={"ID":"623031aa-d897-417f-9aa1-15fe1810baa9","Type":"ContainerDied","Data":"db7ed2327686574540e8c47d1cea1f5b0ae49d799c296a77ed45def3e318b62f"} Nov 24 17:05:58 crc kubenswrapper[4768]: I1124 17:05:58.979136 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvdbc" event={"ID":"623031aa-d897-417f-9aa1-15fe1810baa9","Type":"ContainerStarted","Data":"77843fa5a7396afcb5f64f606075f67313d2d1ce234cfd499dca26eff3939fde"} Nov 24 17:05:58 crc kubenswrapper[4768]: I1124 17:05:58.981202 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jdszs" event={"ID":"f5b8ba2f-084a-4285-938b-5ffe669a9250","Type":"ContainerStarted","Data":"b2e617460e301c3bc4735a06d1b66b6b36be61bd78ac8c7b980399c6d43fc116"} Nov 24 17:05:59 crc kubenswrapper[4768]: I1124 17:05:59.004999 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rvdbc" podStartSLOduration=3.185753713 podStartE2EDuration="5.004975657s" podCreationTimestamp="2025-11-24 17:05:54 +0000 UTC" firstStartedPulling="2025-11-24 17:05:56.920596317 +0000 UTC m=+838.167564975" 
lastFinishedPulling="2025-11-24 17:05:58.739818261 +0000 UTC m=+839.986786919" observedRunningTime="2025-11-24 17:05:58.997386036 +0000 UTC m=+840.244354714" watchObservedRunningTime="2025-11-24 17:05:59.004975657 +0000 UTC m=+840.251944325" Nov 24 17:05:59 crc kubenswrapper[4768]: I1124 17:05:59.017959 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jdszs" podStartSLOduration=2.098624842 podStartE2EDuration="43.017939768s" podCreationTimestamp="2025-11-24 17:05:16 +0000 UTC" firstStartedPulling="2025-11-24 17:05:17.34573851 +0000 UTC m=+798.592707168" lastFinishedPulling="2025-11-24 17:05:58.265053436 +0000 UTC m=+839.512022094" observedRunningTime="2025-11-24 17:05:59.015926582 +0000 UTC m=+840.262895260" watchObservedRunningTime="2025-11-24 17:05:59.017939768 +0000 UTC m=+840.264908446" Nov 24 17:05:59 crc kubenswrapper[4768]: I1124 17:05:59.598609 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4a92cd5-0847-4380-8530-9c9892f7b443" path="/var/lib/kubelet/pods/b4a92cd5-0847-4380-8530-9c9892f7b443/volumes" Nov 24 17:05:59 crc kubenswrapper[4768]: I1124 17:05:59.621302 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cq696" Nov 24 17:05:59 crc kubenswrapper[4768]: I1124 17:05:59.621385 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cq696" Nov 24 17:06:00 crc kubenswrapper[4768]: I1124 17:06:00.670381 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cq696" podUID="90fcab37-b99c-486d-a48b-059f3d28a3ee" containerName="registry-server" probeResult="failure" output=< Nov 24 17:06:00 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Nov 24 17:06:00 crc kubenswrapper[4768]: > Nov 24 17:06:04 crc kubenswrapper[4768]: I1124 17:06:04.769960 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rvdbc" Nov 24 17:06:04 crc kubenswrapper[4768]: I1124 17:06:04.770270 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rvdbc" Nov 24 17:06:04 crc kubenswrapper[4768]: I1124 17:06:04.841705 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rvdbc" Nov 24 17:06:04 crc kubenswrapper[4768]: I1124 17:06:04.893269 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:06:04 crc kubenswrapper[4768]: I1124 17:06:04.893326 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:06:04 crc kubenswrapper[4768]: I1124 17:06:04.893397 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf255" Nov 24 17:06:04 crc kubenswrapper[4768]: I1124 17:06:04.894150 4768 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"95ef721fc07cbc17b0f7e83371486f8b9c131887d050be1100a4afc5d9e98d85"} pod="openshift-machine-config-operator/machine-config-daemon-jf255" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 17:06:04 crc kubenswrapper[4768]: I1124 17:06:04.894220 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" containerID="cri-o://95ef721fc07cbc17b0f7e83371486f8b9c131887d050be1100a4afc5d9e98d85" gracePeriod=600 Nov 24 17:06:05 crc kubenswrapper[4768]: I1124 17:06:05.032528 4768 generic.go:334] "Generic (PLEG): container finished" podID="517d8128-bef5-40a3-a786-5010780c2a58" containerID="95ef721fc07cbc17b0f7e83371486f8b9c131887d050be1100a4afc5d9e98d85" exitCode=0 Nov 24 17:06:05 crc kubenswrapper[4768]: I1124 17:06:05.032639 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" event={"ID":"517d8128-bef5-40a3-a786-5010780c2a58","Type":"ContainerDied","Data":"95ef721fc07cbc17b0f7e83371486f8b9c131887d050be1100a4afc5d9e98d85"} Nov 24 17:06:05 crc kubenswrapper[4768]: I1124 17:06:05.032854 4768 scope.go:117] "RemoveContainer" containerID="6ca92bad52ab5f1c01d70ab976d6cd2ca8cb33df2eb005d50a6ec3e7eded09d6" Nov 24 17:06:05 crc kubenswrapper[4768]: I1124 17:06:05.075275 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rvdbc" Nov 24 17:06:05 crc kubenswrapper[4768]: I1124 17:06:05.121726 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rvdbc"] Nov 24 17:06:06 crc kubenswrapper[4768]: I1124 17:06:06.043576 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" event={"ID":"517d8128-bef5-40a3-a786-5010780c2a58","Type":"ContainerStarted","Data":"a2e9550255187c12513b9f3f9cfbe5c32ed6243e82d0531966cc6a07af83a0c7"} Nov 24 17:06:06 crc kubenswrapper[4768]: I1124 17:06:06.482955 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jdszs" Nov 24 17:06:06 crc kubenswrapper[4768]: I1124 17:06:06.484952 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-jdszs" Nov 24 17:06:06 crc kubenswrapper[4768]: I1124 17:06:06.676111 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-58fc45656d-mlqr9" Nov 24 17:06:07 crc kubenswrapper[4768]: I1124 17:06:07.051910 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rvdbc" podUID="623031aa-d897-417f-9aa1-15fe1810baa9" containerName="registry-server" containerID="cri-o://77843fa5a7396afcb5f64f606075f67313d2d1ce234cfd499dca26eff3939fde" gracePeriod=2 Nov 24 17:06:07 crc kubenswrapper[4768]: I1124 17:06:07.158849 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-qnqvs" Nov 24 17:06:07 crc kubenswrapper[4768]: I1124 17:06:07.244607 4768 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-x24f2" Nov 24 17:06:07 crc kubenswrapper[4768]: I1124 17:06:07.539915 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rvdbc" Nov 24 17:06:07 crc kubenswrapper[4768]: I1124 17:06:07.727523 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2stx\" (UniqueName: \"kubernetes.io/projected/623031aa-d897-417f-9aa1-15fe1810baa9-kube-api-access-d2stx\") pod \"623031aa-d897-417f-9aa1-15fe1810baa9\" (UID: \"623031aa-d897-417f-9aa1-15fe1810baa9\") " Nov 24 17:06:07 crc kubenswrapper[4768]: I1124 17:06:07.727870 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623031aa-d897-417f-9aa1-15fe1810baa9-catalog-content\") pod \"623031aa-d897-417f-9aa1-15fe1810baa9\" (UID: \"623031aa-d897-417f-9aa1-15fe1810baa9\") " Nov 24 17:06:07 crc kubenswrapper[4768]: I1124 17:06:07.727932 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623031aa-d897-417f-9aa1-15fe1810baa9-utilities\") pod \"623031aa-d897-417f-9aa1-15fe1810baa9\" (UID: \"623031aa-d897-417f-9aa1-15fe1810baa9\") " Nov 24 17:06:07 crc kubenswrapper[4768]: I1124 17:06:07.729065 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/623031aa-d897-417f-9aa1-15fe1810baa9-utilities" (OuterVolumeSpecName: "utilities") pod "623031aa-d897-417f-9aa1-15fe1810baa9" (UID: "623031aa-d897-417f-9aa1-15fe1810baa9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:06:07 crc kubenswrapper[4768]: I1124 17:06:07.733970 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/623031aa-d897-417f-9aa1-15fe1810baa9-kube-api-access-d2stx" (OuterVolumeSpecName: "kube-api-access-d2stx") pod "623031aa-d897-417f-9aa1-15fe1810baa9" (UID: "623031aa-d897-417f-9aa1-15fe1810baa9"). InnerVolumeSpecName "kube-api-access-d2stx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:06:07 crc kubenswrapper[4768]: I1124 17:06:07.792566 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/623031aa-d897-417f-9aa1-15fe1810baa9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "623031aa-d897-417f-9aa1-15fe1810baa9" (UID: "623031aa-d897-417f-9aa1-15fe1810baa9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:06:07 crc kubenswrapper[4768]: I1124 17:06:07.829428 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2stx\" (UniqueName: \"kubernetes.io/projected/623031aa-d897-417f-9aa1-15fe1810baa9-kube-api-access-d2stx\") on node \"crc\" DevicePath \"\"" Nov 24 17:06:07 crc kubenswrapper[4768]: I1124 17:06:07.829458 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623031aa-d897-417f-9aa1-15fe1810baa9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:06:07 crc kubenswrapper[4768]: I1124 17:06:07.829468 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623031aa-d897-417f-9aa1-15fe1810baa9-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:06:08 crc kubenswrapper[4768]: I1124 17:06:08.061955 4768 generic.go:334] "Generic (PLEG): container finished" podID="623031aa-d897-417f-9aa1-15fe1810baa9" containerID="77843fa5a7396afcb5f64f606075f67313d2d1ce234cfd499dca26eff3939fde" exitCode=0 Nov 24 17:06:08 crc kubenswrapper[4768]: I1124 17:06:08.062113 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvdbc" event={"ID":"623031aa-d897-417f-9aa1-15fe1810baa9","Type":"ContainerDied","Data":"77843fa5a7396afcb5f64f606075f67313d2d1ce234cfd499dca26eff3939fde"} Nov 24 17:06:08 crc kubenswrapper[4768]: I1124 17:06:08.062179 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rvdbc" Nov 24 17:06:08 crc kubenswrapper[4768]: I1124 17:06:08.062570 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rvdbc" event={"ID":"623031aa-d897-417f-9aa1-15fe1810baa9","Type":"ContainerDied","Data":"6729bf955840344e344326231f59caeab311d1f9b827d77013be1f69cbc8fe54"} Nov 24 17:06:08 crc kubenswrapper[4768]: I1124 17:06:08.062595 4768 scope.go:117] "RemoveContainer" containerID="77843fa5a7396afcb5f64f606075f67313d2d1ce234cfd499dca26eff3939fde" Nov 24 17:06:08 crc kubenswrapper[4768]: I1124 17:06:08.093917 4768 scope.go:117] "RemoveContainer" containerID="db7ed2327686574540e8c47d1cea1f5b0ae49d799c296a77ed45def3e318b62f" Nov 24 17:06:08 crc kubenswrapper[4768]: I1124 17:06:08.095091 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rvdbc"] Nov 24 17:06:08 crc kubenswrapper[4768]: I1124 17:06:08.099464 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rvdbc"] Nov 24 17:06:08 crc kubenswrapper[4768]: I1124 17:06:08.123945 4768 scope.go:117] "RemoveContainer" containerID="95027dc55130f4649970ac8d28427871b953130035110e064dc50ba831d3b621" Nov 24 17:06:08 crc kubenswrapper[4768]: I1124 17:06:08.141268 4768 scope.go:117] "RemoveContainer" containerID="77843fa5a7396afcb5f64f606075f67313d2d1ce234cfd499dca26eff3939fde" Nov 24 17:06:08 crc kubenswrapper[4768]: E1124 17:06:08.141652 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77843fa5a7396afcb5f64f606075f67313d2d1ce234cfd499dca26eff3939fde\": container with ID starting with 77843fa5a7396afcb5f64f606075f67313d2d1ce234cfd499dca26eff3939fde not found: ID does not exist" containerID="77843fa5a7396afcb5f64f606075f67313d2d1ce234cfd499dca26eff3939fde" Nov 24 17:06:08 crc kubenswrapper[4768]: I1124 17:06:08.141697 
4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77843fa5a7396afcb5f64f606075f67313d2d1ce234cfd499dca26eff3939fde"} err="failed to get container status \"77843fa5a7396afcb5f64f606075f67313d2d1ce234cfd499dca26eff3939fde\": rpc error: code = NotFound desc = could not find container \"77843fa5a7396afcb5f64f606075f67313d2d1ce234cfd499dca26eff3939fde\": container with ID starting with 77843fa5a7396afcb5f64f606075f67313d2d1ce234cfd499dca26eff3939fde not found: ID does not exist" Nov 24 17:06:08 crc kubenswrapper[4768]: I1124 17:06:08.141726 4768 scope.go:117] "RemoveContainer" containerID="db7ed2327686574540e8c47d1cea1f5b0ae49d799c296a77ed45def3e318b62f" Nov 24 17:06:08 crc kubenswrapper[4768]: E1124 17:06:08.142030 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db7ed2327686574540e8c47d1cea1f5b0ae49d799c296a77ed45def3e318b62f\": container with ID starting with db7ed2327686574540e8c47d1cea1f5b0ae49d799c296a77ed45def3e318b62f not found: ID does not exist" containerID="db7ed2327686574540e8c47d1cea1f5b0ae49d799c296a77ed45def3e318b62f" Nov 24 17:06:08 crc kubenswrapper[4768]: I1124 17:06:08.142092 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db7ed2327686574540e8c47d1cea1f5b0ae49d799c296a77ed45def3e318b62f"} err="failed to get container status \"db7ed2327686574540e8c47d1cea1f5b0ae49d799c296a77ed45def3e318b62f\": rpc error: code = NotFound desc = could not find container \"db7ed2327686574540e8c47d1cea1f5b0ae49d799c296a77ed45def3e318b62f\": container with ID starting with db7ed2327686574540e8c47d1cea1f5b0ae49d799c296a77ed45def3e318b62f not found: ID does not exist" Nov 24 17:06:08 crc kubenswrapper[4768]: I1124 17:06:08.142147 4768 scope.go:117] "RemoveContainer" containerID="95027dc55130f4649970ac8d28427871b953130035110e064dc50ba831d3b621" Nov 24 17:06:08 crc kubenswrapper[4768]: E1124 17:06:08.142577 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95027dc55130f4649970ac8d28427871b953130035110e064dc50ba831d3b621\": container with ID starting with 95027dc55130f4649970ac8d28427871b953130035110e064dc50ba831d3b621 not found: ID does not exist" containerID="95027dc55130f4649970ac8d28427871b953130035110e064dc50ba831d3b621" Nov 24 17:06:08 crc kubenswrapper[4768]: I1124 17:06:08.142616 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95027dc55130f4649970ac8d28427871b953130035110e064dc50ba831d3b621"} err="failed to get container status \"95027dc55130f4649970ac8d28427871b953130035110e064dc50ba831d3b621\": rpc error: code = NotFound desc = could not find container \"95027dc55130f4649970ac8d28427871b953130035110e064dc50ba831d3b621\": container with ID starting with 95027dc55130f4649970ac8d28427871b953130035110e064dc50ba831d3b621 not found: ID does not exist" Nov 24 17:06:09 crc kubenswrapper[4768]: I1124 17:06:09.591165 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="623031aa-d897-417f-9aa1-15fe1810baa9" path="/var/lib/kubelet/pods/623031aa-d897-417f-9aa1-15fe1810baa9/volumes" Nov 24 17:06:09 crc kubenswrapper[4768]: I1124 17:06:09.681753 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cq696" Nov 24 17:06:09 crc kubenswrapper[4768]: I1124 17:06:09.734182 4768 kubelet.go:2542] "SyncLoop (probe)" 
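The RemoveContainer / "ID does not exist" pairs above are a benign race: by the time the second status lookup runs, CRI-O has already deleted the container, so ContainerStatus comes back as gRPC NotFound and the deletor just logs it and moves on. A client talking to a CRI runtime can classify that case from the status code; a minimal sketch (the error value here is fabricated to match the logged ones):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyRemoved reports whether err is the runtime saying the
// container no longer exists, like the "could not find container"
// errors logged above.
func alreadyRemoved(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	// Stand-in for an error returned over the CRI socket by CRI-O.
	err := status.Error(codes.NotFound, "could not find container")
	if alreadyRemoved(err) {
		fmt.Println("container already gone; treat delete as complete")
	}
}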
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cq696" Nov 24 17:06:10 crc kubenswrapper[4768]: I1124 17:06:10.477046 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cq696"] Nov 24 17:06:11 crc kubenswrapper[4768]: I1124 17:06:11.090788 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cq696" podUID="90fcab37-b99c-486d-a48b-059f3d28a3ee" containerName="registry-server" containerID="cri-o://53ffcbdf46e0d22e81b4dc5fc2f35a2a1e5cdf1e3dad1bb8fba451c566290f93" gracePeriod=2 Nov 24 17:06:11 crc kubenswrapper[4768]: E1124 17:06:11.165210 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90fcab37_b99c_486d_a48b_059f3d28a3ee.slice/crio-53ffcbdf46e0d22e81b4dc5fc2f35a2a1e5cdf1e3dad1bb8fba451c566290f93.scope\": RecentStats: unable to find data in memory cache]" Nov 24 17:06:11 crc kubenswrapper[4768]: I1124 17:06:11.506184 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cq696" Nov 24 17:06:11 crc kubenswrapper[4768]: I1124 17:06:11.689464 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90fcab37-b99c-486d-a48b-059f3d28a3ee-utilities\") pod \"90fcab37-b99c-486d-a48b-059f3d28a3ee\" (UID: \"90fcab37-b99c-486d-a48b-059f3d28a3ee\") " Nov 24 17:06:11 crc kubenswrapper[4768]: I1124 17:06:11.689526 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zq7v\" (UniqueName: \"kubernetes.io/projected/90fcab37-b99c-486d-a48b-059f3d28a3ee-kube-api-access-6zq7v\") pod \"90fcab37-b99c-486d-a48b-059f3d28a3ee\" (UID: \"90fcab37-b99c-486d-a48b-059f3d28a3ee\") " Nov 24 17:06:11 crc kubenswrapper[4768]: I1124 17:06:11.689550 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90fcab37-b99c-486d-a48b-059f3d28a3ee-catalog-content\") pod \"90fcab37-b99c-486d-a48b-059f3d28a3ee\" (UID: \"90fcab37-b99c-486d-a48b-059f3d28a3ee\") " Nov 24 17:06:11 crc kubenswrapper[4768]: I1124 17:06:11.691370 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90fcab37-b99c-486d-a48b-059f3d28a3ee-utilities" (OuterVolumeSpecName: "utilities") pod "90fcab37-b99c-486d-a48b-059f3d28a3ee" (UID: "90fcab37-b99c-486d-a48b-059f3d28a3ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:06:11 crc kubenswrapper[4768]: I1124 17:06:11.698645 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90fcab37-b99c-486d-a48b-059f3d28a3ee-kube-api-access-6zq7v" (OuterVolumeSpecName: "kube-api-access-6zq7v") pod "90fcab37-b99c-486d-a48b-059f3d28a3ee" (UID: "90fcab37-b99c-486d-a48b-059f3d28a3ee"). InnerVolumeSpecName "kube-api-access-6zq7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:06:11 crc kubenswrapper[4768]: I1124 17:06:11.773227 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90fcab37-b99c-486d-a48b-059f3d28a3ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90fcab37-b99c-486d-a48b-059f3d28a3ee" (UID: "90fcab37-b99c-486d-a48b-059f3d28a3ee"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:06:11 crc kubenswrapper[4768]: I1124 17:06:11.790916 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90fcab37-b99c-486d-a48b-059f3d28a3ee-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:06:11 crc kubenswrapper[4768]: I1124 17:06:11.790965 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zq7v\" (UniqueName: \"kubernetes.io/projected/90fcab37-b99c-486d-a48b-059f3d28a3ee-kube-api-access-6zq7v\") on node \"crc\" DevicePath \"\"" Nov 24 17:06:11 crc kubenswrapper[4768]: I1124 17:06:11.790983 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90fcab37-b99c-486d-a48b-059f3d28a3ee-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:06:12 crc kubenswrapper[4768]: I1124 17:06:12.100784 4768 generic.go:334] "Generic (PLEG): container finished" podID="90fcab37-b99c-486d-a48b-059f3d28a3ee" containerID="53ffcbdf46e0d22e81b4dc5fc2f35a2a1e5cdf1e3dad1bb8fba451c566290f93" exitCode=0 Nov 24 17:06:12 crc kubenswrapper[4768]: I1124 17:06:12.100847 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cq696" event={"ID":"90fcab37-b99c-486d-a48b-059f3d28a3ee","Type":"ContainerDied","Data":"53ffcbdf46e0d22e81b4dc5fc2f35a2a1e5cdf1e3dad1bb8fba451c566290f93"} Nov 24 17:06:12 crc kubenswrapper[4768]: I1124 17:06:12.100885 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cq696" Nov 24 17:06:12 crc kubenswrapper[4768]: I1124 17:06:12.100898 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cq696" event={"ID":"90fcab37-b99c-486d-a48b-059f3d28a3ee","Type":"ContainerDied","Data":"e44d05b994c6c8870acfe64510287d201feab1a8984a3986cdf944f2acd010e0"} Nov 24 17:06:12 crc kubenswrapper[4768]: I1124 17:06:12.100915 4768 scope.go:117] "RemoveContainer" containerID="53ffcbdf46e0d22e81b4dc5fc2f35a2a1e5cdf1e3dad1bb8fba451c566290f93" Nov 24 17:06:12 crc kubenswrapper[4768]: I1124 17:06:12.126040 4768 scope.go:117] "RemoveContainer" containerID="90cf5d41e0bec27d55beb344fdcf50aa3142018c513d1ce5ba8fe724529a8fcb" Nov 24 17:06:12 crc kubenswrapper[4768]: I1124 17:06:12.152091 4768 scope.go:117] "RemoveContainer" containerID="39799a605116639d882aba40ec63ece8537e13ac5b0b6344af0b47dc1bfd3ba3" Nov 24 17:06:12 crc kubenswrapper[4768]: I1124 17:06:12.158387 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cq696"] Nov 24 17:06:12 crc kubenswrapper[4768]: I1124 17:06:12.169418 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cq696"] Nov 24 17:06:12 crc kubenswrapper[4768]: I1124 17:06:12.194510 4768 scope.go:117] "RemoveContainer" containerID="53ffcbdf46e0d22e81b4dc5fc2f35a2a1e5cdf1e3dad1bb8fba451c566290f93" Nov 24 17:06:12 crc kubenswrapper[4768]: E1124 17:06:12.194969 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53ffcbdf46e0d22e81b4dc5fc2f35a2a1e5cdf1e3dad1bb8fba451c566290f93\": container with ID starting with 53ffcbdf46e0d22e81b4dc5fc2f35a2a1e5cdf1e3dad1bb8fba451c566290f93 not found: ID does not exist" containerID="53ffcbdf46e0d22e81b4dc5fc2f35a2a1e5cdf1e3dad1bb8fba451c566290f93" Nov 24 17:06:12 crc kubenswrapper[4768]: 
I1124 17:06:12.195038 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53ffcbdf46e0d22e81b4dc5fc2f35a2a1e5cdf1e3dad1bb8fba451c566290f93"} err="failed to get container status \"53ffcbdf46e0d22e81b4dc5fc2f35a2a1e5cdf1e3dad1bb8fba451c566290f93\": rpc error: code = NotFound desc = could not find container \"53ffcbdf46e0d22e81b4dc5fc2f35a2a1e5cdf1e3dad1bb8fba451c566290f93\": container with ID starting with 53ffcbdf46e0d22e81b4dc5fc2f35a2a1e5cdf1e3dad1bb8fba451c566290f93 not found: ID does not exist" Nov 24 17:06:12 crc kubenswrapper[4768]: I1124 17:06:12.195075 4768 scope.go:117] "RemoveContainer" containerID="90cf5d41e0bec27d55beb344fdcf50aa3142018c513d1ce5ba8fe724529a8fcb" Nov 24 17:06:12 crc kubenswrapper[4768]: E1124 17:06:12.195435 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90cf5d41e0bec27d55beb344fdcf50aa3142018c513d1ce5ba8fe724529a8fcb\": container with ID starting with 90cf5d41e0bec27d55beb344fdcf50aa3142018c513d1ce5ba8fe724529a8fcb not found: ID does not exist" containerID="90cf5d41e0bec27d55beb344fdcf50aa3142018c513d1ce5ba8fe724529a8fcb" Nov 24 17:06:12 crc kubenswrapper[4768]: I1124 17:06:12.195478 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90cf5d41e0bec27d55beb344fdcf50aa3142018c513d1ce5ba8fe724529a8fcb"} err="failed to get container status \"90cf5d41e0bec27d55beb344fdcf50aa3142018c513d1ce5ba8fe724529a8fcb\": rpc error: code = NotFound desc = could not find container \"90cf5d41e0bec27d55beb344fdcf50aa3142018c513d1ce5ba8fe724529a8fcb\": container with ID starting with 90cf5d41e0bec27d55beb344fdcf50aa3142018c513d1ce5ba8fe724529a8fcb not found: ID does not exist" Nov 24 17:06:12 crc kubenswrapper[4768]: I1124 17:06:12.195500 4768 scope.go:117] "RemoveContainer" containerID="39799a605116639d882aba40ec63ece8537e13ac5b0b6344af0b47dc1bfd3ba3" Nov 24 17:06:12 crc kubenswrapper[4768]: E1124 17:06:12.195700 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39799a605116639d882aba40ec63ece8537e13ac5b0b6344af0b47dc1bfd3ba3\": container with ID starting with 39799a605116639d882aba40ec63ece8537e13ac5b0b6344af0b47dc1bfd3ba3 not found: ID does not exist" containerID="39799a605116639d882aba40ec63ece8537e13ac5b0b6344af0b47dc1bfd3ba3" Nov 24 17:06:12 crc kubenswrapper[4768]: I1124 17:06:12.195726 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39799a605116639d882aba40ec63ece8537e13ac5b0b6344af0b47dc1bfd3ba3"} err="failed to get container status \"39799a605116639d882aba40ec63ece8537e13ac5b0b6344af0b47dc1bfd3ba3\": rpc error: code = NotFound desc = could not find container \"39799a605116639d882aba40ec63ece8537e13ac5b0b6344af0b47dc1bfd3ba3\": container with ID starting with 39799a605116639d882aba40ec63ece8537e13ac5b0b6344af0b47dc1bfd3ba3 not found: ID does not exist" Nov 24 17:06:13 crc kubenswrapper[4768]: I1124 17:06:13.594267 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90fcab37-b99c-486d-a48b-059f3d28a3ee" path="/var/lib/kubelet/pods/90fcab37-b99c-486d-a48b-059f3d28a3ee/volumes" Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.123621 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v7ntw"] Nov 24 17:06:23 crc kubenswrapper[4768]: E1124 17:06:23.124401 4768 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="623031aa-d897-417f-9aa1-15fe1810baa9" containerName="registry-server" Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.124436 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="623031aa-d897-417f-9aa1-15fe1810baa9" containerName="registry-server" Nov 24 17:06:23 crc kubenswrapper[4768]: E1124 17:06:23.124463 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a92cd5-0847-4380-8530-9c9892f7b443" containerName="registry-server" Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.124469 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a92cd5-0847-4380-8530-9c9892f7b443" containerName="registry-server" Nov 24 17:06:23 crc kubenswrapper[4768]: E1124 17:06:23.124485 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623031aa-d897-417f-9aa1-15fe1810baa9" containerName="extract-content" Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.124492 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="623031aa-d897-417f-9aa1-15fe1810baa9" containerName="extract-content" Nov 24 17:06:23 crc kubenswrapper[4768]: E1124 17:06:23.124508 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90fcab37-b99c-486d-a48b-059f3d28a3ee" containerName="extract-content" Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.124514 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="90fcab37-b99c-486d-a48b-059f3d28a3ee" containerName="extract-content" Nov 24 17:06:23 crc kubenswrapper[4768]: E1124 17:06:23.124526 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90fcab37-b99c-486d-a48b-059f3d28a3ee" containerName="extract-utilities" Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.124531 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="90fcab37-b99c-486d-a48b-059f3d28a3ee" containerName="extract-utilities" Nov 24 17:06:23 crc kubenswrapper[4768]: E1124 17:06:23.124542 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623031aa-d897-417f-9aa1-15fe1810baa9" containerName="extract-utilities" Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.124548 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="623031aa-d897-417f-9aa1-15fe1810baa9" containerName="extract-utilities" Nov 24 17:06:23 crc kubenswrapper[4768]: E1124 17:06:23.124557 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a92cd5-0847-4380-8530-9c9892f7b443" containerName="extract-utilities" Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.124563 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a92cd5-0847-4380-8530-9c9892f7b443" containerName="extract-utilities" Nov 24 17:06:23 crc kubenswrapper[4768]: E1124 17:06:23.124577 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90fcab37-b99c-486d-a48b-059f3d28a3ee" containerName="registry-server" Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.124583 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="90fcab37-b99c-486d-a48b-059f3d28a3ee" containerName="registry-server" Nov 24 17:06:23 crc kubenswrapper[4768]: E1124 17:06:23.124598 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a92cd5-0847-4380-8530-9c9892f7b443" containerName="extract-content" Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.124603 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a92cd5-0847-4380-8530-9c9892f7b443" containerName="extract-content" Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.124731 4768 
Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.125552 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-v7ntw"
Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.129331 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.129437 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-hjfsm"
Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.129444 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.129441 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.129608 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.142469 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v7ntw"]
Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.274988 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/098ec293-0087-4407-96d2-f79512483a1a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-v7ntw\" (UID: \"098ec293-0087-4407-96d2-f79512483a1a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v7ntw"
Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.275084 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/098ec293-0087-4407-96d2-f79512483a1a-config\") pod \"dnsmasq-dns-78dd6ddcc-v7ntw\" (UID: \"098ec293-0087-4407-96d2-f79512483a1a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v7ntw"
Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.275166 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt4gg\" (UniqueName: \"kubernetes.io/projected/098ec293-0087-4407-96d2-f79512483a1a-kube-api-access-kt4gg\") pod \"dnsmasq-dns-78dd6ddcc-v7ntw\" (UID: \"098ec293-0087-4407-96d2-f79512483a1a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v7ntw"
Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.376043 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt4gg\" (UniqueName: \"kubernetes.io/projected/098ec293-0087-4407-96d2-f79512483a1a-kube-api-access-kt4gg\") pod \"dnsmasq-dns-78dd6ddcc-v7ntw\" (UID: \"098ec293-0087-4407-96d2-f79512483a1a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v7ntw"
Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.376121 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/098ec293-0087-4407-96d2-f79512483a1a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-v7ntw\" (UID: \"098ec293-0087-4407-96d2-f79512483a1a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v7ntw"
Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.376177 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/098ec293-0087-4407-96d2-f79512483a1a-config\") pod \"dnsmasq-dns-78dd6ddcc-v7ntw\" (UID: \"098ec293-0087-4407-96d2-f79512483a1a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v7ntw"
Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.377030 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/098ec293-0087-4407-96d2-f79512483a1a-config\") pod \"dnsmasq-dns-78dd6ddcc-v7ntw\" (UID: \"098ec293-0087-4407-96d2-f79512483a1a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v7ntw"
Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.377092 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/098ec293-0087-4407-96d2-f79512483a1a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-v7ntw\" (UID: \"098ec293-0087-4407-96d2-f79512483a1a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v7ntw"
Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.395114 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt4gg\" (UniqueName: \"kubernetes.io/projected/098ec293-0087-4407-96d2-f79512483a1a-kube-api-access-kt4gg\") pod \"dnsmasq-dns-78dd6ddcc-v7ntw\" (UID: \"098ec293-0087-4407-96d2-f79512483a1a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v7ntw"
Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.444904 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-v7ntw"
Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.874314 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v7ntw"]
Nov 24 17:06:23 crc kubenswrapper[4768]: I1124 17:06:23.879704 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 24 17:06:24 crc kubenswrapper[4768]: I1124 17:06:24.209435 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-v7ntw" event={"ID":"098ec293-0087-4407-96d2-f79512483a1a","Type":"ContainerStarted","Data":"71ded9405ca10ac73fca067c60dfa2a48fe8884abd29e2c8f89e5bfd3f433bac"}
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.089708 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zwf2f"]
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.092004 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zwf2f"
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.099169 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zwf2f"]
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.220756 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adec42ae-7642-46b5-abc6-492f3ceb1c14-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zwf2f\" (UID: \"adec42ae-7642-46b5-abc6-492f3ceb1c14\") " pod="openstack/dnsmasq-dns-666b6646f7-zwf2f"
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.220839 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adec42ae-7642-46b5-abc6-492f3ceb1c14-config\") pod \"dnsmasq-dns-666b6646f7-zwf2f\" (UID: \"adec42ae-7642-46b5-abc6-492f3ceb1c14\") " pod="openstack/dnsmasq-dns-666b6646f7-zwf2f"
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.220892 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdb24\" (UniqueName: \"kubernetes.io/projected/adec42ae-7642-46b5-abc6-492f3ceb1c14-kube-api-access-fdb24\") pod \"dnsmasq-dns-666b6646f7-zwf2f\" (UID: \"adec42ae-7642-46b5-abc6-492f3ceb1c14\") " pod="openstack/dnsmasq-dns-666b6646f7-zwf2f"
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.322036 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adec42ae-7642-46b5-abc6-492f3ceb1c14-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zwf2f\" (UID: \"adec42ae-7642-46b5-abc6-492f3ceb1c14\") " pod="openstack/dnsmasq-dns-666b6646f7-zwf2f"
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.322108 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adec42ae-7642-46b5-abc6-492f3ceb1c14-config\") pod \"dnsmasq-dns-666b6646f7-zwf2f\" (UID: \"adec42ae-7642-46b5-abc6-492f3ceb1c14\") " pod="openstack/dnsmasq-dns-666b6646f7-zwf2f"
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.322161 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdb24\" (UniqueName: \"kubernetes.io/projected/adec42ae-7642-46b5-abc6-492f3ceb1c14-kube-api-access-fdb24\") pod \"dnsmasq-dns-666b6646f7-zwf2f\" (UID: \"adec42ae-7642-46b5-abc6-492f3ceb1c14\") " pod="openstack/dnsmasq-dns-666b6646f7-zwf2f"
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.323410 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adec42ae-7642-46b5-abc6-492f3ceb1c14-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zwf2f\" (UID: \"adec42ae-7642-46b5-abc6-492f3ceb1c14\") " pod="openstack/dnsmasq-dns-666b6646f7-zwf2f"
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.323903 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adec42ae-7642-46b5-abc6-492f3ceb1c14-config\") pod \"dnsmasq-dns-666b6646f7-zwf2f\" (UID: \"adec42ae-7642-46b5-abc6-492f3ceb1c14\") " pod="openstack/dnsmasq-dns-666b6646f7-zwf2f"
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.344724 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdb24\" (UniqueName: \"kubernetes.io/projected/adec42ae-7642-46b5-abc6-492f3ceb1c14-kube-api-access-fdb24\") pod \"dnsmasq-dns-666b6646f7-zwf2f\" (UID: \"adec42ae-7642-46b5-abc6-492f3ceb1c14\") " pod="openstack/dnsmasq-dns-666b6646f7-zwf2f"
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.420765 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zwf2f"
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.433132 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v7ntw"]
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.471756 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kx42s"]
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.472903 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kx42s"
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.485851 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kx42s"]
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.625552 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a9201c-6219-4268-bca9-b285b28d1c52-config\") pod \"dnsmasq-dns-57d769cc4f-kx42s\" (UID: \"80a9201c-6219-4268-bca9-b285b28d1c52\") " pod="openstack/dnsmasq-dns-57d769cc4f-kx42s"
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.625737 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsmqv\" (UniqueName: \"kubernetes.io/projected/80a9201c-6219-4268-bca9-b285b28d1c52-kube-api-access-bsmqv\") pod \"dnsmasq-dns-57d769cc4f-kx42s\" (UID: \"80a9201c-6219-4268-bca9-b285b28d1c52\") " pod="openstack/dnsmasq-dns-57d769cc4f-kx42s"
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.625800 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80a9201c-6219-4268-bca9-b285b28d1c52-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-kx42s\" (UID: \"80a9201c-6219-4268-bca9-b285b28d1c52\") " pod="openstack/dnsmasq-dns-57d769cc4f-kx42s"
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.727497 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsmqv\" (UniqueName: \"kubernetes.io/projected/80a9201c-6219-4268-bca9-b285b28d1c52-kube-api-access-bsmqv\") pod \"dnsmasq-dns-57d769cc4f-kx42s\" (UID: \"80a9201c-6219-4268-bca9-b285b28d1c52\") " pod="openstack/dnsmasq-dns-57d769cc4f-kx42s"
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.727577 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80a9201c-6219-4268-bca9-b285b28d1c52-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-kx42s\" (UID: \"80a9201c-6219-4268-bca9-b285b28d1c52\") " pod="openstack/dnsmasq-dns-57d769cc4f-kx42s"
Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.727599 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a9201c-6219-4268-bca9-b285b28d1c52-config\") pod \"dnsmasq-dns-57d769cc4f-kx42s\" (UID: \"80a9201c-6219-4268-bca9-b285b28d1c52\") " pod="openstack/dnsmasq-dns-57d769cc4f-kx42s"
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80a9201c-6219-4268-bca9-b285b28d1c52-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-kx42s\" (UID: \"80a9201c-6219-4268-bca9-b285b28d1c52\") " pod="openstack/dnsmasq-dns-57d769cc4f-kx42s" Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.728729 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a9201c-6219-4268-bca9-b285b28d1c52-config\") pod \"dnsmasq-dns-57d769cc4f-kx42s\" (UID: \"80a9201c-6219-4268-bca9-b285b28d1c52\") " pod="openstack/dnsmasq-dns-57d769cc4f-kx42s" Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.747718 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsmqv\" (UniqueName: \"kubernetes.io/projected/80a9201c-6219-4268-bca9-b285b28d1c52-kube-api-access-bsmqv\") pod \"dnsmasq-dns-57d769cc4f-kx42s\" (UID: \"80a9201c-6219-4268-bca9-b285b28d1c52\") " pod="openstack/dnsmasq-dns-57d769cc4f-kx42s" Nov 24 17:06:26 crc kubenswrapper[4768]: I1124 17:06:26.787184 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kx42s" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.277127 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.278677 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.280970 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.281195 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.281452 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.281646 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-l4ftf" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.283052 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.283353 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.283503 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.293565 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.436703 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.436804 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e47b81a6-f793-404b-9713-121732eea148-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") 
" pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.436840 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.436874 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.436910 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.436961 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e47b81a6-f793-404b-9713-121732eea148-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.436980 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.437010 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e47b81a6-f793-404b-9713-121732eea148-config-data\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.437034 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e47b81a6-f793-404b-9713-121732eea148-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.437156 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxfm7\" (UniqueName: \"kubernetes.io/projected/e47b81a6-f793-404b-9713-121732eea148-kube-api-access-hxfm7\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.437247 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e47b81a6-f793-404b-9713-121732eea148-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc 
kubenswrapper[4768]: I1124 17:06:27.539142 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.539254 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e47b81a6-f793-404b-9713-121732eea148-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.539291 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.539337 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.539425 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.539505 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e47b81a6-f793-404b-9713-121732eea148-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.539533 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.539574 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e47b81a6-f793-404b-9713-121732eea148-config-data\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.539603 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e47b81a6-f793-404b-9713-121732eea148-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.539637 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxfm7\" (UniqueName: \"kubernetes.io/projected/e47b81a6-f793-404b-9713-121732eea148-kube-api-access-hxfm7\") pod 
\"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.539641 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.539975 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.539991 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.541415 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e47b81a6-f793-404b-9713-121732eea148-config-data\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.542282 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e47b81a6-f793-404b-9713-121732eea148-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.542542 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e47b81a6-f793-404b-9713-121732eea148-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.542641 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e47b81a6-f793-404b-9713-121732eea148-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.545812 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e47b81a6-f793-404b-9713-121732eea148-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.548291 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.549570 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.551704 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e47b81a6-f793-404b-9713-121732eea148-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.560217 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxfm7\" (UniqueName: \"kubernetes.io/projected/e47b81a6-f793-404b-9713-121732eea148-kube-api-access-hxfm7\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.577401 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.610677 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.612730 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.614664 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.616525 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.616921 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-g9ntq" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.617212 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.617335 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.617608 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.617913 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.618063 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.618768 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.746084 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:06:27 crc 
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.610677 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.612730 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.614664 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.616525 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.616921 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-g9ntq"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.617212 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.617335 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.617608 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.617913 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.618063 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.618768 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.746084 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.746164 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4fcab967-8d79-401f-927b-8770680c9c30-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.746246 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.746291 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.746330 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.746377 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4fcab967-8d79-401f-927b-8770680c9c30-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.746402 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.746505 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzqdh\" (UniqueName: \"kubernetes.io/projected/4fcab967-8d79-401f-927b-8770680c9c30-kube-api-access-fzqdh\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.746763 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4fcab967-8d79-401f-927b-8770680c9c30-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.746839 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4fcab967-8d79-401f-927b-8770680c9c30-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.746912 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4fcab967-8d79-401f-927b-8770680c9c30-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.848091 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4fcab967-8d79-401f-927b-8770680c9c30-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.848404 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.848431 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4fcab967-8d79-401f-927b-8770680c9c30-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.848474 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.848500 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.848529 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.848552 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4fcab967-8d79-401f-927b-8770680c9c30-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.848576 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.848593 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzqdh\" (UniqueName: \"kubernetes.io/projected/4fcab967-8d79-401f-927b-8770680c9c30-kube-api-access-fzqdh\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.848610 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4fcab967-8d79-401f-927b-8770680c9c30-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.848625 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4fcab967-8d79-401f-927b-8770680c9c30-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.849595 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.849765 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4fcab967-8d79-401f-927b-8770680c9c30-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.850306 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4fcab967-8d79-401f-927b-8770680c9c30-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.850549 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.851401 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.853132 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4fcab967-8d79-401f-927b-8770680c9c30-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.863277 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID:
\"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.864471 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4fcab967-8d79-401f-927b-8770680c9c30-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.864466 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.879059 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4fcab967-8d79-401f-927b-8770680c9c30-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.883429 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzqdh\" (UniqueName: \"kubernetes.io/projected/4fcab967-8d79-401f-927b-8770680c9c30-kube-api-access-fzqdh\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.884802 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:06:27 crc kubenswrapper[4768]: I1124 17:06:27.969565 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:06:28 crc kubenswrapper[4768]: I1124 17:06:28.934739 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 24 17:06:28 crc kubenswrapper[4768]: I1124 17:06:28.944984 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 17:06:28 crc kubenswrapper[4768]: I1124 17:06:28.947398 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 24 17:06:28 crc kubenswrapper[4768]: I1124 17:06:28.947675 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 24 17:06:28 crc kubenswrapper[4768]: I1124 17:06:28.948625 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-rb2vh" Nov 24 17:06:28 crc kubenswrapper[4768]: I1124 17:06:28.949066 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 24 17:06:28 crc kubenswrapper[4768]: I1124 17:06:28.957039 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 24 17:06:28 crc kubenswrapper[4768]: I1124 17:06:28.995860 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.098569 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d0c08ff-07c5-42d9-bbd4-77169f98868a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.098689 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0d0c08ff-07c5-42d9-bbd4-77169f98868a-config-data-default\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.098721 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d0c08ff-07c5-42d9-bbd4-77169f98868a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.098789 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0d0c08ff-07c5-42d9-bbd4-77169f98868a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.098811 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjdcg\" (UniqueName: \"kubernetes.io/projected/0d0c08ff-07c5-42d9-bbd4-77169f98868a-kube-api-access-mjdcg\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.098839 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d0c08ff-07c5-42d9-bbd4-77169f98868a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.098892 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.098911 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0d0c08ff-07c5-42d9-bbd4-77169f98868a-kolla-config\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.200607 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjdcg\" (UniqueName: \"kubernetes.io/projected/0d0c08ff-07c5-42d9-bbd4-77169f98868a-kube-api-access-mjdcg\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.200701 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d0c08ff-07c5-42d9-bbd4-77169f98868a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.200794 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.200840 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0d0c08ff-07c5-42d9-bbd4-77169f98868a-kolla-config\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.200895 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d0c08ff-07c5-42d9-bbd4-77169f98868a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.200959 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0d0c08ff-07c5-42d9-bbd4-77169f98868a-config-data-default\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.200983 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d0c08ff-07c5-42d9-bbd4-77169f98868a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.200990 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") 
device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.201618 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0d0c08ff-07c5-42d9-bbd4-77169f98868a-kolla-config\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.201646 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0d0c08ff-07c5-42d9-bbd4-77169f98868a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.201870 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0d0c08ff-07c5-42d9-bbd4-77169f98868a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.202959 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d0c08ff-07c5-42d9-bbd4-77169f98868a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.203885 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0d0c08ff-07c5-42d9-bbd4-77169f98868a-config-data-default\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.207077 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d0c08ff-07c5-42d9-bbd4-77169f98868a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.214132 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d0c08ff-07c5-42d9-bbd4-77169f98868a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.216119 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjdcg\" (UniqueName: \"kubernetes.io/projected/0d0c08ff-07c5-42d9-bbd4-77169f98868a-kube-api-access-mjdcg\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.226110 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"0d0c08ff-07c5-42d9-bbd4-77169f98868a\") " pod="openstack/openstack-galera-0" Nov 24 17:06:29 crc kubenswrapper[4768]: I1124 17:06:29.348291 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.347728 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.351173 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.353760 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-cb7zx" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.353932 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.355440 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.355611 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.362805 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.421265 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5fda78c-6764-4dfb-837a-b9e48ff5bea8-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.421308 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f5fda78c-6764-4dfb-837a-b9e48ff5bea8-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.421342 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f5fda78c-6764-4dfb-837a-b9e48ff5bea8-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.421374 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.421419 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f5fda78c-6764-4dfb-837a-b9e48ff5bea8-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.421436 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5fda78c-6764-4dfb-837a-b9e48ff5bea8-operator-scripts\") pod 
\"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.421473 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5fda78c-6764-4dfb-837a-b9e48ff5bea8-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.421488 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfhrd\" (UniqueName: \"kubernetes.io/projected/f5fda78c-6764-4dfb-837a-b9e48ff5bea8-kube-api-access-mfhrd\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.446116 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.447130 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.451706 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.453481 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.453486 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.454374 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-47swf" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.522592 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f5fda78c-6764-4dfb-837a-b9e48ff5bea8-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.522951 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shtds\" (UniqueName: \"kubernetes.io/projected/bfe18146-b6db-422b-965f-8b22d4943e4f-kube-api-access-shtds\") pod \"memcached-0\" (UID: \"bfe18146-b6db-422b-965f-8b22d4943e4f\") " pod="openstack/memcached-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.523071 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.523307 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f5fda78c-6764-4dfb-837a-b9e48ff5bea8-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.523297 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.523458 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfe18146-b6db-422b-965f-8b22d4943e4f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bfe18146-b6db-422b-965f-8b22d4943e4f\") " pod="openstack/memcached-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.523588 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bfe18146-b6db-422b-965f-8b22d4943e4f-kolla-config\") pod \"memcached-0\" (UID: \"bfe18146-b6db-422b-965f-8b22d4943e4f\") " pod="openstack/memcached-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.523672 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f5fda78c-6764-4dfb-837a-b9e48ff5bea8-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.523751 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5fda78c-6764-4dfb-837a-b9e48ff5bea8-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.524138 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f5fda78c-6764-4dfb-837a-b9e48ff5bea8-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.524629 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bfe18146-b6db-422b-965f-8b22d4943e4f-config-data\") pod \"memcached-0\" (UID: \"bfe18146-b6db-422b-965f-8b22d4943e4f\") " pod="openstack/memcached-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.524746 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5fda78c-6764-4dfb-837a-b9e48ff5bea8-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.524793 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfhrd\" (UniqueName: \"kubernetes.io/projected/f5fda78c-6764-4dfb-837a-b9e48ff5bea8-kube-api-access-mfhrd\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.524824 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bfe18146-b6db-422b-965f-8b22d4943e4f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bfe18146-b6db-422b-965f-8b22d4943e4f\") " pod="openstack/memcached-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.524915 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5fda78c-6764-4dfb-837a-b9e48ff5bea8-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.524952 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f5fda78c-6764-4dfb-837a-b9e48ff5bea8-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.525237 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5fda78c-6764-4dfb-837a-b9e48ff5bea8-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.526033 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f5fda78c-6764-4dfb-837a-b9e48ff5bea8-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.544959 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5fda78c-6764-4dfb-837a-b9e48ff5bea8-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.545129 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5fda78c-6764-4dfb-837a-b9e48ff5bea8-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.548861 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfhrd\" (UniqueName: \"kubernetes.io/projected/f5fda78c-6764-4dfb-837a-b9e48ff5bea8-kube-api-access-mfhrd\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.553144 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f5fda78c-6764-4dfb-837a-b9e48ff5bea8\") " pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.626476 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shtds\" (UniqueName: \"kubernetes.io/projected/bfe18146-b6db-422b-965f-8b22d4943e4f-kube-api-access-shtds\") pod \"memcached-0\" (UID: \"bfe18146-b6db-422b-965f-8b22d4943e4f\") " 
pod="openstack/memcached-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.626537 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfe18146-b6db-422b-965f-8b22d4943e4f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bfe18146-b6db-422b-965f-8b22d4943e4f\") " pod="openstack/memcached-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.626614 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bfe18146-b6db-422b-965f-8b22d4943e4f-kolla-config\") pod \"memcached-0\" (UID: \"bfe18146-b6db-422b-965f-8b22d4943e4f\") " pod="openstack/memcached-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.626683 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bfe18146-b6db-422b-965f-8b22d4943e4f-config-data\") pod \"memcached-0\" (UID: \"bfe18146-b6db-422b-965f-8b22d4943e4f\") " pod="openstack/memcached-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.626722 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfe18146-b6db-422b-965f-8b22d4943e4f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bfe18146-b6db-422b-965f-8b22d4943e4f\") " pod="openstack/memcached-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.627637 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bfe18146-b6db-422b-965f-8b22d4943e4f-config-data\") pod \"memcached-0\" (UID: \"bfe18146-b6db-422b-965f-8b22d4943e4f\") " pod="openstack/memcached-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.628108 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bfe18146-b6db-422b-965f-8b22d4943e4f-kolla-config\") pod \"memcached-0\" (UID: \"bfe18146-b6db-422b-965f-8b22d4943e4f\") " pod="openstack/memcached-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.630701 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfe18146-b6db-422b-965f-8b22d4943e4f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bfe18146-b6db-422b-965f-8b22d4943e4f\") " pod="openstack/memcached-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.631087 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfe18146-b6db-422b-965f-8b22d4943e4f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bfe18146-b6db-422b-965f-8b22d4943e4f\") " pod="openstack/memcached-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.648222 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shtds\" (UniqueName: \"kubernetes.io/projected/bfe18146-b6db-422b-965f-8b22d4943e4f-kube-api-access-shtds\") pod \"memcached-0\" (UID: \"bfe18146-b6db-422b-965f-8b22d4943e4f\") " pod="openstack/memcached-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.750140 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 17:06:30 crc kubenswrapper[4768]: I1124 17:06:30.778872 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 24 17:06:32 crc kubenswrapper[4768]: I1124 17:06:32.168655 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 17:06:32 crc kubenswrapper[4768]: I1124 17:06:32.339259 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 17:06:32 crc kubenswrapper[4768]: I1124 17:06:32.340576 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 17:06:32 crc kubenswrapper[4768]: I1124 17:06:32.344548 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-w2rz7" Nov 24 17:06:32 crc kubenswrapper[4768]: I1124 17:06:32.354018 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 17:06:32 crc kubenswrapper[4768]: I1124 17:06:32.470542 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5r2b\" (UniqueName: \"kubernetes.io/projected/866ab349-cb74-4f16-9927-87eb7f5af5b8-kube-api-access-g5r2b\") pod \"kube-state-metrics-0\" (UID: \"866ab349-cb74-4f16-9927-87eb7f5af5b8\") " pod="openstack/kube-state-metrics-0" Nov 24 17:06:32 crc kubenswrapper[4768]: I1124 17:06:32.572658 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5r2b\" (UniqueName: \"kubernetes.io/projected/866ab349-cb74-4f16-9927-87eb7f5af5b8-kube-api-access-g5r2b\") pod \"kube-state-metrics-0\" (UID: \"866ab349-cb74-4f16-9927-87eb7f5af5b8\") " pod="openstack/kube-state-metrics-0" Nov 24 17:06:32 crc kubenswrapper[4768]: I1124 17:06:32.595284 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5r2b\" (UniqueName: \"kubernetes.io/projected/866ab349-cb74-4f16-9927-87eb7f5af5b8-kube-api-access-g5r2b\") pod \"kube-state-metrics-0\" (UID: \"866ab349-cb74-4f16-9927-87eb7f5af5b8\") " pod="openstack/kube-state-metrics-0" Nov 24 17:06:32 crc kubenswrapper[4768]: I1124 17:06:32.665659 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.007608 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8j94t"] Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.008947 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8j94t" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.010842 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.011537 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.011860 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-bpplp" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.020873 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-zpbbq"] Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.022677 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-zpbbq" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.032131 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8j94t"] Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.048709 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-zpbbq"] Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.058878 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzrq7\" (UniqueName: \"kubernetes.io/projected/40425fc1-a61b-4da7-95a4-262b16a8020f-kube-api-access-rzrq7\") pod \"ovn-controller-ovs-zpbbq\" (UID: \"40425fc1-a61b-4da7-95a4-262b16a8020f\") " pod="openstack/ovn-controller-ovs-zpbbq" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.058929 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/df42583a-33cf-4b89-9f69-7f3baeb6e7b5-var-log-ovn\") pod \"ovn-controller-8j94t\" (UID: \"df42583a-33cf-4b89-9f69-7f3baeb6e7b5\") " pod="openstack/ovn-controller-8j94t" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.059009 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/40425fc1-a61b-4da7-95a4-262b16a8020f-var-run\") pod \"ovn-controller-ovs-zpbbq\" (UID: \"40425fc1-a61b-4da7-95a4-262b16a8020f\") " pod="openstack/ovn-controller-ovs-zpbbq" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.059050 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkwqz\" (UniqueName: \"kubernetes.io/projected/df42583a-33cf-4b89-9f69-7f3baeb6e7b5-kube-api-access-wkwqz\") pod \"ovn-controller-8j94t\" (UID: \"df42583a-33cf-4b89-9f69-7f3baeb6e7b5\") " pod="openstack/ovn-controller-8j94t" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.059071 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/40425fc1-a61b-4da7-95a4-262b16a8020f-var-log\") pod \"ovn-controller-ovs-zpbbq\" (UID: \"40425fc1-a61b-4da7-95a4-262b16a8020f\") " pod="openstack/ovn-controller-ovs-zpbbq" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.059201 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40425fc1-a61b-4da7-95a4-262b16a8020f-scripts\") pod \"ovn-controller-ovs-zpbbq\" (UID: \"40425fc1-a61b-4da7-95a4-262b16a8020f\") " pod="openstack/ovn-controller-ovs-zpbbq" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.059237 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/df42583a-33cf-4b89-9f69-7f3baeb6e7b5-var-run\") pod \"ovn-controller-8j94t\" (UID: \"df42583a-33cf-4b89-9f69-7f3baeb6e7b5\") " pod="openstack/ovn-controller-8j94t" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.059260 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/df42583a-33cf-4b89-9f69-7f3baeb6e7b5-var-run-ovn\") pod \"ovn-controller-8j94t\" (UID: \"df42583a-33cf-4b89-9f69-7f3baeb6e7b5\") " pod="openstack/ovn-controller-8j94t" Nov 24 17:06:36 crc kubenswrapper[4768]: 
I1124 17:06:36.059299 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df42583a-33cf-4b89-9f69-7f3baeb6e7b5-combined-ca-bundle\") pod \"ovn-controller-8j94t\" (UID: \"df42583a-33cf-4b89-9f69-7f3baeb6e7b5\") " pod="openstack/ovn-controller-8j94t" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.059356 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/40425fc1-a61b-4da7-95a4-262b16a8020f-var-lib\") pod \"ovn-controller-ovs-zpbbq\" (UID: \"40425fc1-a61b-4da7-95a4-262b16a8020f\") " pod="openstack/ovn-controller-ovs-zpbbq" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.059402 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/40425fc1-a61b-4da7-95a4-262b16a8020f-etc-ovs\") pod \"ovn-controller-ovs-zpbbq\" (UID: \"40425fc1-a61b-4da7-95a4-262b16a8020f\") " pod="openstack/ovn-controller-ovs-zpbbq" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.059417 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df42583a-33cf-4b89-9f69-7f3baeb6e7b5-scripts\") pod \"ovn-controller-8j94t\" (UID: \"df42583a-33cf-4b89-9f69-7f3baeb6e7b5\") " pod="openstack/ovn-controller-8j94t" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.059432 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/df42583a-33cf-4b89-9f69-7f3baeb6e7b5-ovn-controller-tls-certs\") pod \"ovn-controller-8j94t\" (UID: \"df42583a-33cf-4b89-9f69-7f3baeb6e7b5\") " pod="openstack/ovn-controller-8j94t" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.160922 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df42583a-33cf-4b89-9f69-7f3baeb6e7b5-combined-ca-bundle\") pod \"ovn-controller-8j94t\" (UID: \"df42583a-33cf-4b89-9f69-7f3baeb6e7b5\") " pod="openstack/ovn-controller-8j94t" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.160990 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/40425fc1-a61b-4da7-95a4-262b16a8020f-var-lib\") pod \"ovn-controller-ovs-zpbbq\" (UID: \"40425fc1-a61b-4da7-95a4-262b16a8020f\") " pod="openstack/ovn-controller-ovs-zpbbq" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.161032 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/40425fc1-a61b-4da7-95a4-262b16a8020f-etc-ovs\") pod \"ovn-controller-ovs-zpbbq\" (UID: \"40425fc1-a61b-4da7-95a4-262b16a8020f\") " pod="openstack/ovn-controller-ovs-zpbbq" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.161048 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df42583a-33cf-4b89-9f69-7f3baeb6e7b5-scripts\") pod \"ovn-controller-8j94t\" (UID: \"df42583a-33cf-4b89-9f69-7f3baeb6e7b5\") " pod="openstack/ovn-controller-8j94t" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.161066 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/df42583a-33cf-4b89-9f69-7f3baeb6e7b5-ovn-controller-tls-certs\") pod \"ovn-controller-8j94t\" (UID: \"df42583a-33cf-4b89-9f69-7f3baeb6e7b5\") " pod="openstack/ovn-controller-8j94t" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.161089 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzrq7\" (UniqueName: \"kubernetes.io/projected/40425fc1-a61b-4da7-95a4-262b16a8020f-kube-api-access-rzrq7\") pod \"ovn-controller-ovs-zpbbq\" (UID: \"40425fc1-a61b-4da7-95a4-262b16a8020f\") " pod="openstack/ovn-controller-ovs-zpbbq" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.161118 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/df42583a-33cf-4b89-9f69-7f3baeb6e7b5-var-log-ovn\") pod \"ovn-controller-8j94t\" (UID: \"df42583a-33cf-4b89-9f69-7f3baeb6e7b5\") " pod="openstack/ovn-controller-8j94t" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.161147 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/40425fc1-a61b-4da7-95a4-262b16a8020f-var-run\") pod \"ovn-controller-ovs-zpbbq\" (UID: \"40425fc1-a61b-4da7-95a4-262b16a8020f\") " pod="openstack/ovn-controller-ovs-zpbbq" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.161788 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/40425fc1-a61b-4da7-95a4-262b16a8020f-etc-ovs\") pod \"ovn-controller-ovs-zpbbq\" (UID: \"40425fc1-a61b-4da7-95a4-262b16a8020f\") " pod="openstack/ovn-controller-ovs-zpbbq" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.161797 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/40425fc1-a61b-4da7-95a4-262b16a8020f-var-lib\") pod \"ovn-controller-ovs-zpbbq\" (UID: \"40425fc1-a61b-4da7-95a4-262b16a8020f\") " pod="openstack/ovn-controller-ovs-zpbbq" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.161859 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/df42583a-33cf-4b89-9f69-7f3baeb6e7b5-var-log-ovn\") pod \"ovn-controller-8j94t\" (UID: \"df42583a-33cf-4b89-9f69-7f3baeb6e7b5\") " pod="openstack/ovn-controller-8j94t" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.161942 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/40425fc1-a61b-4da7-95a4-262b16a8020f-var-run\") pod \"ovn-controller-ovs-zpbbq\" (UID: \"40425fc1-a61b-4da7-95a4-262b16a8020f\") " pod="openstack/ovn-controller-ovs-zpbbq" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.163128 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df42583a-33cf-4b89-9f69-7f3baeb6e7b5-scripts\") pod \"ovn-controller-8j94t\" (UID: \"df42583a-33cf-4b89-9f69-7f3baeb6e7b5\") " pod="openstack/ovn-controller-8j94t" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.163774 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkwqz\" (UniqueName: \"kubernetes.io/projected/df42583a-33cf-4b89-9f69-7f3baeb6e7b5-kube-api-access-wkwqz\") pod \"ovn-controller-8j94t\" (UID: \"df42583a-33cf-4b89-9f69-7f3baeb6e7b5\") " pod="openstack/ovn-controller-8j94t" Nov 
24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.163980 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/40425fc1-a61b-4da7-95a4-262b16a8020f-var-log\") pod \"ovn-controller-ovs-zpbbq\" (UID: \"40425fc1-a61b-4da7-95a4-262b16a8020f\") " pod="openstack/ovn-controller-ovs-zpbbq" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.164019 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40425fc1-a61b-4da7-95a4-262b16a8020f-scripts\") pod \"ovn-controller-ovs-zpbbq\" (UID: \"40425fc1-a61b-4da7-95a4-262b16a8020f\") " pod="openstack/ovn-controller-ovs-zpbbq" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.164051 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/df42583a-33cf-4b89-9f69-7f3baeb6e7b5-var-run\") pod \"ovn-controller-8j94t\" (UID: \"df42583a-33cf-4b89-9f69-7f3baeb6e7b5\") " pod="openstack/ovn-controller-8j94t" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.164079 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/df42583a-33cf-4b89-9f69-7f3baeb6e7b5-var-run-ovn\") pod \"ovn-controller-8j94t\" (UID: \"df42583a-33cf-4b89-9f69-7f3baeb6e7b5\") " pod="openstack/ovn-controller-8j94t" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.164228 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/df42583a-33cf-4b89-9f69-7f3baeb6e7b5-var-run-ovn\") pod \"ovn-controller-8j94t\" (UID: \"df42583a-33cf-4b89-9f69-7f3baeb6e7b5\") " pod="openstack/ovn-controller-8j94t" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.164019 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/40425fc1-a61b-4da7-95a4-262b16a8020f-var-log\") pod \"ovn-controller-ovs-zpbbq\" (UID: \"40425fc1-a61b-4da7-95a4-262b16a8020f\") " pod="openstack/ovn-controller-ovs-zpbbq" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.164319 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/df42583a-33cf-4b89-9f69-7f3baeb6e7b5-var-run\") pod \"ovn-controller-8j94t\" (UID: \"df42583a-33cf-4b89-9f69-7f3baeb6e7b5\") " pod="openstack/ovn-controller-8j94t" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.166395 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/40425fc1-a61b-4da7-95a4-262b16a8020f-scripts\") pod \"ovn-controller-ovs-zpbbq\" (UID: \"40425fc1-a61b-4da7-95a4-262b16a8020f\") " pod="openstack/ovn-controller-ovs-zpbbq" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.170977 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df42583a-33cf-4b89-9f69-7f3baeb6e7b5-combined-ca-bundle\") pod \"ovn-controller-8j94t\" (UID: \"df42583a-33cf-4b89-9f69-7f3baeb6e7b5\") " pod="openstack/ovn-controller-8j94t" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.172906 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/df42583a-33cf-4b89-9f69-7f3baeb6e7b5-ovn-controller-tls-certs\") pod \"ovn-controller-8j94t\" (UID: 
\"df42583a-33cf-4b89-9f69-7f3baeb6e7b5\") " pod="openstack/ovn-controller-8j94t" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.177481 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzrq7\" (UniqueName: \"kubernetes.io/projected/40425fc1-a61b-4da7-95a4-262b16a8020f-kube-api-access-rzrq7\") pod \"ovn-controller-ovs-zpbbq\" (UID: \"40425fc1-a61b-4da7-95a4-262b16a8020f\") " pod="openstack/ovn-controller-ovs-zpbbq" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.179479 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkwqz\" (UniqueName: \"kubernetes.io/projected/df42583a-33cf-4b89-9f69-7f3baeb6e7b5-kube-api-access-wkwqz\") pod \"ovn-controller-8j94t\" (UID: \"df42583a-33cf-4b89-9f69-7f3baeb6e7b5\") " pod="openstack/ovn-controller-8j94t" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.325922 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8j94t" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.339031 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-zpbbq" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.893977 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kx42s"] Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.916702 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.918192 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.920702 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.921033 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-6ct8v" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.922599 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.922769 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.923124 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.925588 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 17:06:36 crc kubenswrapper[4768]: I1124 17:06:36.977493 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.082710 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.082761 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqn9d\" (UniqueName: \"kubernetes.io/projected/f4ae8da1-9449-46bf-8e88-fc42708e6c53-kube-api-access-bqn9d\") pod \"ovsdbserver-nb-0\" (UID: 
\"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.082783 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4ae8da1-9449-46bf-8e88-fc42708e6c53-config\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.082820 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4ae8da1-9449-46bf-8e88-fc42708e6c53-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.082851 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4ae8da1-9449-46bf-8e88-fc42708e6c53-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.082881 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ae8da1-9449-46bf-8e88-fc42708e6c53-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.082902 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4ae8da1-9449-46bf-8e88-fc42708e6c53-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.082922 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4ae8da1-9449-46bf-8e88-fc42708e6c53-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.184116 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4ae8da1-9449-46bf-8e88-fc42708e6c53-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.184197 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.184221 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqn9d\" (UniqueName: \"kubernetes.io/projected/f4ae8da1-9449-46bf-8e88-fc42708e6c53-kube-api-access-bqn9d\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.184240 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4ae8da1-9449-46bf-8e88-fc42708e6c53-config\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.184270 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4ae8da1-9449-46bf-8e88-fc42708e6c53-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.184299 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4ae8da1-9449-46bf-8e88-fc42708e6c53-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.184331 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ae8da1-9449-46bf-8e88-fc42708e6c53-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.184363 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4ae8da1-9449-46bf-8e88-fc42708e6c53-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.184771 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.184891 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4ae8da1-9449-46bf-8e88-fc42708e6c53-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.185311 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4ae8da1-9449-46bf-8e88-fc42708e6c53-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.185367 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4ae8da1-9449-46bf-8e88-fc42708e6c53-config\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.191588 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ae8da1-9449-46bf-8e88-fc42708e6c53-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc 
kubenswrapper[4768]: I1124 17:06:37.191612 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4ae8da1-9449-46bf-8e88-fc42708e6c53-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.204267 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqn9d\" (UniqueName: \"kubernetes.io/projected/f4ae8da1-9449-46bf-8e88-fc42708e6c53-kube-api-access-bqn9d\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.209947 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4ae8da1-9449-46bf-8e88-fc42708e6c53-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.212142 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"f4ae8da1-9449-46bf-8e88-fc42708e6c53\") " pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.243934 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 17:06:37 crc kubenswrapper[4768]: W1124 17:06:37.298486 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fcab967_8d79_401f_927b_8770680c9c30.slice/crio-8e212cf564c79cb2536b0aacd15fa20da8e915a55a75345fdb1e4b16e522642a WatchSource:0}: Error finding container 8e212cf564c79cb2536b0aacd15fa20da8e915a55a75345fdb1e4b16e522642a: Status 404 returned error can't find the container with id 8e212cf564c79cb2536b0aacd15fa20da8e915a55a75345fdb1e4b16e522642a Nov 24 17:06:37 crc kubenswrapper[4768]: W1124 17:06:37.301956 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80a9201c_6219_4268_bca9_b285b28d1c52.slice/crio-13a2e81a92a7684a6b01588939abb10b839dcca233c37c36760d31a8a5725536 WatchSource:0}: Error finding container 13a2e81a92a7684a6b01588939abb10b839dcca233c37c36760d31a8a5725536: Status 404 returned error can't find the container with id 13a2e81a92a7684a6b01588939abb10b839dcca233c37c36760d31a8a5725536 Nov 24 17:06:37 crc kubenswrapper[4768]: E1124 17:06:37.327067 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 17:06:37 crc kubenswrapper[4768]: E1124 17:06:37.327443 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kt4gg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-v7ntw_openstack(098ec293-0087-4407-96d2-f79512483a1a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 17:06:37 crc kubenswrapper[4768]: E1124 17:06:37.332557 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-v7ntw" podUID="098ec293-0087-4407-96d2-f79512483a1a" Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.338138 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kx42s" event={"ID":"80a9201c-6219-4268-bca9-b285b28d1c52","Type":"ContainerStarted","Data":"13a2e81a92a7684a6b01588939abb10b839dcca233c37c36760d31a8a5725536"} Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.345150 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4fcab967-8d79-401f-927b-8770680c9c30","Type":"ContainerStarted","Data":"8e212cf564c79cb2536b0aacd15fa20da8e915a55a75345fdb1e4b16e522642a"} Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.347283 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e47b81a6-f793-404b-9713-121732eea148","Type":"ContainerStarted","Data":"592b29d157f1ff5827edde83f5779bb6e88274a1a27acf2e0569a1876ee1e688"} Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.820891 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zwf2f"] Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.830428 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 17:06:37 crc kubenswrapper[4768]: I1124 17:06:37.960605 4768 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 17:06:37 crc kubenswrapper[4768]: W1124 17:06:37.967248 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d0c08ff_07c5_42d9_bbd4_77169f98868a.slice/crio-058b83c4627a3a481a9a3e06c9e765636c76fcc33ddccf4b3ab8e2a1ff433298 WatchSource:0}: Error finding container 058b83c4627a3a481a9a3e06c9e765636c76fcc33ddccf4b3ab8e2a1ff433298: Status 404 returned error can't find the container with id 058b83c4627a3a481a9a3e06c9e765636c76fcc33ddccf4b3ab8e2a1ff433298 Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.087788 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8j94t"] Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.095185 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.199743 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 17:06:38 crc kubenswrapper[4768]: W1124 17:06:38.210320 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4ae8da1_9449_46bf_8e88_fc42708e6c53.slice/crio-93975efa29ece0431bbabfd3f09006a55b16456d3c735c48e7f72648a4c5e5ca WatchSource:0}: Error finding container 93975efa29ece0431bbabfd3f09006a55b16456d3c735c48e7f72648a4c5e5ca: Status 404 returned error can't find the container with id 93975efa29ece0431bbabfd3f09006a55b16456d3c735c48e7f72648a4c5e5ca Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.345317 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 17:06:38 crc kubenswrapper[4768]: W1124 17:06:38.353829 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod866ab349_cb74_4f16_9927_87eb7f5af5b8.slice/crio-ad26194ec6f9c128d7f19d4a27ea14aba49f7b19acaf9ff40fc0cc30bd0c78bf WatchSource:0}: Error finding container ad26194ec6f9c128d7f19d4a27ea14aba49f7b19acaf9ff40fc0cc30bd0c78bf: Status 404 returned error can't find the container with id ad26194ec6f9c128d7f19d4a27ea14aba49f7b19acaf9ff40fc0cc30bd0c78bf Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.363882 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zwf2f" event={"ID":"adec42ae-7642-46b5-abc6-492f3ceb1c14","Type":"ContainerStarted","Data":"4da537de70f0cc650297e8850ee74ab1581df7a9eb58160b2442373f7ce33b12"} Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.365530 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f5fda78c-6764-4dfb-837a-b9e48ff5bea8","Type":"ContainerStarted","Data":"b7e3f7db7b43673aa250693d45a92283e32872341b4143c641169fc327c1e92d"} Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.368718 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f4ae8da1-9449-46bf-8e88-fc42708e6c53","Type":"ContainerStarted","Data":"93975efa29ece0431bbabfd3f09006a55b16456d3c735c48e7f72648a4c5e5ca"} Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.370093 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"0d0c08ff-07c5-42d9-bbd4-77169f98868a","Type":"ContainerStarted","Data":"058b83c4627a3a481a9a3e06c9e765636c76fcc33ddccf4b3ab8e2a1ff433298"} Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.371548 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"bfe18146-b6db-422b-965f-8b22d4943e4f","Type":"ContainerStarted","Data":"84e69598f1b08ddb08fc2d78a6d60869d1c5ac517fce0d897988a8e1006b91a1"} Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.372645 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8j94t" event={"ID":"df42583a-33cf-4b89-9f69-7f3baeb6e7b5","Type":"ContainerStarted","Data":"c5aa3e2cc5d0695725224b172d0744c07ebb11ea62b285e7bb50c255aa101228"} Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.678630 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-v7ntw" Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.774838 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-zpbbq"] Nov 24 17:06:38 crc kubenswrapper[4768]: W1124 17:06:38.785413 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40425fc1_a61b_4da7_95a4_262b16a8020f.slice/crio-751d1e0e4ecd04f4f1cc45deb4492f5f32276e24cc4cd828c2cf4a8d12a98d07 WatchSource:0}: Error finding container 751d1e0e4ecd04f4f1cc45deb4492f5f32276e24cc4cd828c2cf4a8d12a98d07: Status 404 returned error can't find the container with id 751d1e0e4ecd04f4f1cc45deb4492f5f32276e24cc4cd828c2cf4a8d12a98d07 Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.814033 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/098ec293-0087-4407-96d2-f79512483a1a-config\") pod \"098ec293-0087-4407-96d2-f79512483a1a\" (UID: \"098ec293-0087-4407-96d2-f79512483a1a\") " Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.814159 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt4gg\" (UniqueName: \"kubernetes.io/projected/098ec293-0087-4407-96d2-f79512483a1a-kube-api-access-kt4gg\") pod \"098ec293-0087-4407-96d2-f79512483a1a\" (UID: \"098ec293-0087-4407-96d2-f79512483a1a\") " Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.814201 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/098ec293-0087-4407-96d2-f79512483a1a-dns-svc\") pod \"098ec293-0087-4407-96d2-f79512483a1a\" (UID: \"098ec293-0087-4407-96d2-f79512483a1a\") " Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.814540 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/098ec293-0087-4407-96d2-f79512483a1a-config" (OuterVolumeSpecName: "config") pod "098ec293-0087-4407-96d2-f79512483a1a" (UID: "098ec293-0087-4407-96d2-f79512483a1a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.814903 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/098ec293-0087-4407-96d2-f79512483a1a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "098ec293-0087-4407-96d2-f79512483a1a" (UID: "098ec293-0087-4407-96d2-f79512483a1a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.818908 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/098ec293-0087-4407-96d2-f79512483a1a-kube-api-access-kt4gg" (OuterVolumeSpecName: "kube-api-access-kt4gg") pod "098ec293-0087-4407-96d2-f79512483a1a" (UID: "098ec293-0087-4407-96d2-f79512483a1a"). InnerVolumeSpecName "kube-api-access-kt4gg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.887465 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.892699 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.895099 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.900153 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.900210 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.900207 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.901544 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-dttrj" Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.923647 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/098ec293-0087-4407-96d2-f79512483a1a-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.923687 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt4gg\" (UniqueName: \"kubernetes.io/projected/098ec293-0087-4407-96d2-f79512483a1a-kube-api-access-kt4gg\") on node \"crc\" DevicePath \"\"" Nov 24 17:06:38 crc kubenswrapper[4768]: I1124 17:06:38.923701 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/098ec293-0087-4407-96d2-f79512483a1a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.024925 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/841a709e-ced3-499f-b13e-d0e1ff90ad11-config\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.024996 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/841a709e-ced3-499f-b13e-d0e1ff90ad11-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.025023 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/841a709e-ced3-499f-b13e-d0e1ff90ad11-ovsdbserver-sb-tls-certs\") pod 
\"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.025154 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/841a709e-ced3-499f-b13e-d0e1ff90ad11-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.025216 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/841a709e-ced3-499f-b13e-d0e1ff90ad11-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.025248 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dvcf\" (UniqueName: \"kubernetes.io/projected/841a709e-ced3-499f-b13e-d0e1ff90ad11-kube-api-access-5dvcf\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.025267 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/841a709e-ced3-499f-b13e-d0e1ff90ad11-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.025290 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.126549 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/841a709e-ced3-499f-b13e-d0e1ff90ad11-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.128488 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/841a709e-ced3-499f-b13e-d0e1ff90ad11-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.128558 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/841a709e-ced3-499f-b13e-d0e1ff90ad11-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.128756 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/841a709e-ced3-499f-b13e-d0e1ff90ad11-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 
17:06:39.128868 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dvcf\" (UniqueName: \"kubernetes.io/projected/841a709e-ced3-499f-b13e-d0e1ff90ad11-kube-api-access-5dvcf\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.128896 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/841a709e-ced3-499f-b13e-d0e1ff90ad11-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.128982 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.129096 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/841a709e-ced3-499f-b13e-d0e1ff90ad11-config\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.129594 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/841a709e-ced3-499f-b13e-d0e1ff90ad11-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.129820 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.130191 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/841a709e-ced3-499f-b13e-d0e1ff90ad11-config\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.131930 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/841a709e-ced3-499f-b13e-d0e1ff90ad11-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.133214 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/841a709e-ced3-499f-b13e-d0e1ff90ad11-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.133939 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/841a709e-ced3-499f-b13e-d0e1ff90ad11-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc 
kubenswrapper[4768]: I1124 17:06:39.147704 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dvcf\" (UniqueName: \"kubernetes.io/projected/841a709e-ced3-499f-b13e-d0e1ff90ad11-kube-api-access-5dvcf\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.155206 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/841a709e-ced3-499f-b13e-d0e1ff90ad11-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.173403 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"841a709e-ced3-499f-b13e-d0e1ff90ad11\") " pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.215725 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.387455 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"866ab349-cb74-4f16-9927-87eb7f5af5b8","Type":"ContainerStarted","Data":"ad26194ec6f9c128d7f19d4a27ea14aba49f7b19acaf9ff40fc0cc30bd0c78bf"} Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.389613 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zpbbq" event={"ID":"40425fc1-a61b-4da7-95a4-262b16a8020f","Type":"ContainerStarted","Data":"751d1e0e4ecd04f4f1cc45deb4492f5f32276e24cc4cd828c2cf4a8d12a98d07"} Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.392238 4768 generic.go:334] "Generic (PLEG): container finished" podID="80a9201c-6219-4268-bca9-b285b28d1c52" containerID="cac6083a46080753618ca13da562ad219e2689230ed695cb6d113a1191a7efb8" exitCode=0 Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.392370 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kx42s" event={"ID":"80a9201c-6219-4268-bca9-b285b28d1c52","Type":"ContainerDied","Data":"cac6083a46080753618ca13da562ad219e2689230ed695cb6d113a1191a7efb8"} Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.394312 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-v7ntw" event={"ID":"098ec293-0087-4407-96d2-f79512483a1a","Type":"ContainerDied","Data":"71ded9405ca10ac73fca067c60dfa2a48fe8884abd29e2c8f89e5bfd3f433bac"} Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.394374 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-v7ntw" Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.396528 4768 generic.go:334] "Generic (PLEG): container finished" podID="adec42ae-7642-46b5-abc6-492f3ceb1c14" containerID="b19cfe77cb583272fb3cd887a502e3ce0d9cbc6a3af01fa815731aab49496917" exitCode=0 Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.396589 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zwf2f" event={"ID":"adec42ae-7642-46b5-abc6-492f3ceb1c14","Type":"ContainerDied","Data":"b19cfe77cb583272fb3cd887a502e3ce0d9cbc6a3af01fa815731aab49496917"} Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.486425 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v7ntw"] Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.492438 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v7ntw"] Nov 24 17:06:39 crc kubenswrapper[4768]: I1124 17:06:39.598048 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="098ec293-0087-4407-96d2-f79512483a1a" path="/var/lib/kubelet/pods/098ec293-0087-4407-96d2-f79512483a1a/volumes" Nov 24 17:06:45 crc kubenswrapper[4768]: I1124 17:06:45.284095 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 17:06:45 crc kubenswrapper[4768]: I1124 17:06:45.456254 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kx42s" event={"ID":"80a9201c-6219-4268-bca9-b285b28d1c52","Type":"ContainerStarted","Data":"c419171365563888fbf4b621e6a215bf4a4f8547f70dbdb5967b69547e0f8de9"} Nov 24 17:06:45 crc kubenswrapper[4768]: I1124 17:06:45.456450 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-kx42s" Nov 24 17:06:45 crc kubenswrapper[4768]: I1124 17:06:45.457505 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"841a709e-ced3-499f-b13e-d0e1ff90ad11","Type":"ContainerStarted","Data":"d0a3c93955e3289229d1f43c6770675c8caa5b9aa823fe3348e59f7f4fc578c0"} Nov 24 17:06:45 crc kubenswrapper[4768]: I1124 17:06:45.476237 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-kx42s" podStartSLOduration=18.737223101 podStartE2EDuration="19.476216569s" podCreationTimestamp="2025-11-24 17:06:26 +0000 UTC" firstStartedPulling="2025-11-24 17:06:37.308275212 +0000 UTC m=+878.555243870" lastFinishedPulling="2025-11-24 17:06:38.04726868 +0000 UTC m=+879.294237338" observedRunningTime="2025-11-24 17:06:45.472339269 +0000 UTC m=+886.719307927" watchObservedRunningTime="2025-11-24 17:06:45.476216569 +0000 UTC m=+886.723185237" Nov 24 17:06:46 crc kubenswrapper[4768]: I1124 17:06:46.465767 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"bfe18146-b6db-422b-965f-8b22d4943e4f","Type":"ContainerStarted","Data":"f43b89917f3d011a00d740dc913803c5dbf6335f2184ffa846b60254df3e727b"} Nov 24 17:06:46 crc kubenswrapper[4768]: I1124 17:06:46.467462 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f5fda78c-6764-4dfb-837a-b9e48ff5bea8","Type":"ContainerStarted","Data":"10fbeb071a6a7741e9f656c78330e1561362757b4221c65031b192ed8cde1e9d"} Nov 24 17:06:46 crc kubenswrapper[4768]: I1124 17:06:46.469041 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"866ab349-cb74-4f16-9927-87eb7f5af5b8","Type":"ContainerStarted","Data":"b1dc7f04820c054aad088905b8e3e3062769cd9d95fe57725b98a4a20c3388ac"} Nov 24 17:06:46 crc kubenswrapper[4768]: I1124 17:06:46.469177 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 17:06:46 crc kubenswrapper[4768]: I1124 17:06:46.470829 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0d0c08ff-07c5-42d9-bbd4-77169f98868a","Type":"ContainerStarted","Data":"f08de07121f2fc0b0ea0ed15bd3151ac9a7849007c7be22c23a6596fb22d5aa4"} Nov 24 17:06:46 crc kubenswrapper[4768]: I1124 17:06:46.482392 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=9.541705472 podStartE2EDuration="16.482369549s" podCreationTimestamp="2025-11-24 17:06:30 +0000 UTC" firstStartedPulling="2025-11-24 17:06:37.839853327 +0000 UTC m=+879.086821985" lastFinishedPulling="2025-11-24 17:06:44.780517404 +0000 UTC m=+886.027486062" observedRunningTime="2025-11-24 17:06:46.481783002 +0000 UTC m=+887.728751660" watchObservedRunningTime="2025-11-24 17:06:46.482369549 +0000 UTC m=+887.729338207" Nov 24 17:06:46 crc kubenswrapper[4768]: I1124 17:06:46.544064 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=7.03889218 podStartE2EDuration="14.544041002s" podCreationTimestamp="2025-11-24 17:06:32 +0000 UTC" firstStartedPulling="2025-11-24 17:06:38.361100801 +0000 UTC m=+879.608069459" lastFinishedPulling="2025-11-24 17:06:45.866249633 +0000 UTC m=+887.113218281" observedRunningTime="2025-11-24 17:06:46.53793945 +0000 UTC m=+887.784908108" watchObservedRunningTime="2025-11-24 17:06:46.544041002 +0000 UTC m=+887.791009670" Nov 24 17:06:47 crc kubenswrapper[4768]: I1124 17:06:47.481567 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e47b81a6-f793-404b-9713-121732eea148","Type":"ContainerStarted","Data":"f36530dde2b99b84e29bb49231ba0ff767276f912fb94ca55d7acc740607a119"} Nov 24 17:06:47 crc kubenswrapper[4768]: I1124 17:06:47.484757 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"841a709e-ced3-499f-b13e-d0e1ff90ad11","Type":"ContainerStarted","Data":"1041bb7987677a593c44c3d12581ffc07fa25653895dfe378edd246087780948"} Nov 24 17:06:47 crc kubenswrapper[4768]: I1124 17:06:47.486458 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8j94t" event={"ID":"df42583a-33cf-4b89-9f69-7f3baeb6e7b5","Type":"ContainerStarted","Data":"3bb386003fb6fd34e1ee5dd1c298164e8e7965225867c51b3fb709befc429abd"} Nov 24 17:06:47 crc kubenswrapper[4768]: I1124 17:06:47.486549 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-8j94t" Nov 24 17:06:47 crc kubenswrapper[4768]: I1124 17:06:47.496211 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4fcab967-8d79-401f-927b-8770680c9c30","Type":"ContainerStarted","Data":"8b5ffa930480d1bcb82470ec566fa1e05afb25a0f5960ca653a351054ba0aa2c"} Nov 24 17:06:47 crc kubenswrapper[4768]: I1124 17:06:47.500028 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zwf2f" event={"ID":"adec42ae-7642-46b5-abc6-492f3ceb1c14","Type":"ContainerStarted","Data":"4be40df2bdab8c465eae90a96dace6d10f6eea84a88507b0d0da65b2a5e1e8de"} 
Nov 24 17:06:47 crc kubenswrapper[4768]: I1124 17:06:47.500473 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-zwf2f"
Nov 24 17:06:47 crc kubenswrapper[4768]: I1124 17:06:47.511420 4768 generic.go:334] "Generic (PLEG): container finished" podID="40425fc1-a61b-4da7-95a4-262b16a8020f" containerID="4ab182f1dee2b7634d2a8e2e7ea22e9b353860748f1be692c3725bb3237ab2ab" exitCode=0
Nov 24 17:06:47 crc kubenswrapper[4768]: I1124 17:06:47.511528 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zpbbq" event={"ID":"40425fc1-a61b-4da7-95a4-262b16a8020f","Type":"ContainerDied","Data":"4ab182f1dee2b7634d2a8e2e7ea22e9b353860748f1be692c3725bb3237ab2ab"}
Nov 24 17:06:47 crc kubenswrapper[4768]: I1124 17:06:47.518340 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f4ae8da1-9449-46bf-8e88-fc42708e6c53","Type":"ContainerStarted","Data":"408e46c94785a6a9e0496a92e7d96edafc2e2cf24dcd6177a253dc62dcd6766b"}
Nov 24 17:06:47 crc kubenswrapper[4768]: I1124 17:06:47.519360 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Nov 24 17:06:47 crc kubenswrapper[4768]: I1124 17:06:47.578994 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-zwf2f" podStartSLOduration=20.92323722 podStartE2EDuration="21.578975486s" podCreationTimestamp="2025-11-24 17:06:26 +0000 UTC" firstStartedPulling="2025-11-24 17:06:37.834591348 +0000 UTC m=+879.081559996" lastFinishedPulling="2025-11-24 17:06:38.490329604 +0000 UTC m=+879.737298262" observedRunningTime="2025-11-24 17:06:47.557025426 +0000 UTC m=+888.803994084" watchObservedRunningTime="2025-11-24 17:06:47.578975486 +0000 UTC m=+888.825944144"
Nov 24 17:06:47 crc kubenswrapper[4768]: I1124 17:06:47.581801 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-8j94t" podStartSLOduration=5.77343963 podStartE2EDuration="12.581787546s" podCreationTimestamp="2025-11-24 17:06:35 +0000 UTC" firstStartedPulling="2025-11-24 17:06:38.11024028 +0000 UTC m=+879.357208938" lastFinishedPulling="2025-11-24 17:06:44.918588196 +0000 UTC m=+886.165556854" observedRunningTime="2025-11-24 17:06:47.572926365 +0000 UTC m=+888.819895053" watchObservedRunningTime="2025-11-24 17:06:47.581787546 +0000 UTC m=+888.828756204"
Nov 24 17:06:48 crc kubenswrapper[4768]: I1124 17:06:48.531948 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zpbbq" event={"ID":"40425fc1-a61b-4da7-95a4-262b16a8020f","Type":"ContainerStarted","Data":"c020cbb406937177dfa2f7121a6e38702635150c3c07dc845067e1522006f4a9"}
Nov 24 17:06:48 crc kubenswrapper[4768]: I1124 17:06:48.532260 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zpbbq" event={"ID":"40425fc1-a61b-4da7-95a4-262b16a8020f","Type":"ContainerStarted","Data":"8df6f3dec9a722b3b3531376537428c4d5723c032a372694620c4bdd2d1d1e88"}
Nov 24 17:06:48 crc kubenswrapper[4768]: I1124 17:06:48.533242 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-zpbbq"
Nov 24 17:06:48 crc kubenswrapper[4768]: I1124 17:06:48.533267 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-zpbbq"
Nov 24 17:06:48 crc kubenswrapper[4768]: I1124 17:06:48.556781 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-zpbbq" podStartSLOduration=6.501956107 podStartE2EDuration="12.556760944s" podCreationTimestamp="2025-11-24 17:06:36 +0000 UTC" firstStartedPulling="2025-11-24 17:06:38.788473591 +0000 UTC m=+880.035442249" lastFinishedPulling="2025-11-24 17:06:44.843278428 +0000 UTC m=+886.090247086" observedRunningTime="2025-11-24 17:06:48.550208029 +0000 UTC m=+889.797176687" watchObservedRunningTime="2025-11-24 17:06:48.556760944 +0000 UTC m=+889.803729602"
Nov 24 17:06:50 crc kubenswrapper[4768]: I1124 17:06:50.569883 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f4ae8da1-9449-46bf-8e88-fc42708e6c53","Type":"ContainerStarted","Data":"d1052d5f82abc00869ab59c7f55dfdc7a657756f0956e0f1d9ab08b133d98520"}
Nov 24 17:06:50 crc kubenswrapper[4768]: I1124 17:06:50.572656 4768 generic.go:334] "Generic (PLEG): container finished" podID="0d0c08ff-07c5-42d9-bbd4-77169f98868a" containerID="f08de07121f2fc0b0ea0ed15bd3151ac9a7849007c7be22c23a6596fb22d5aa4" exitCode=0
Nov 24 17:06:50 crc kubenswrapper[4768]: I1124 17:06:50.572758 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0d0c08ff-07c5-42d9-bbd4-77169f98868a","Type":"ContainerDied","Data":"f08de07121f2fc0b0ea0ed15bd3151ac9a7849007c7be22c23a6596fb22d5aa4"}
Nov 24 17:06:50 crc kubenswrapper[4768]: I1124 17:06:50.576513 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"841a709e-ced3-499f-b13e-d0e1ff90ad11","Type":"ContainerStarted","Data":"f2b01f93dd2cd43711a167114398d620e1f3d38b1d5c742ac485c4c82b9d7fd9"}
Nov 24 17:06:50 crc kubenswrapper[4768]: I1124 17:06:50.581043 4768 generic.go:334] "Generic (PLEG): container finished" podID="f5fda78c-6764-4dfb-837a-b9e48ff5bea8" containerID="10fbeb071a6a7741e9f656c78330e1561362757b4221c65031b192ed8cde1e9d" exitCode=0
Nov 24 17:06:50 crc kubenswrapper[4768]: I1124 17:06:50.581120 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f5fda78c-6764-4dfb-837a-b9e48ff5bea8","Type":"ContainerDied","Data":"10fbeb071a6a7741e9f656c78330e1561362757b4221c65031b192ed8cde1e9d"}
Nov 24 17:06:50 crc kubenswrapper[4768]: I1124 17:06:50.602521 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=4.325152012 podStartE2EDuration="15.6025027s" podCreationTimestamp="2025-11-24 17:06:35 +0000 UTC" firstStartedPulling="2025-11-24 17:06:38.214654902 +0000 UTC m=+879.461623560" lastFinishedPulling="2025-11-24 17:06:49.49200559 +0000 UTC m=+890.738974248" observedRunningTime="2025-11-24 17:06:50.595337937 +0000 UTC m=+891.842306595" watchObservedRunningTime="2025-11-24 17:06:50.6025027 +0000 UTC m=+891.849471378"
Nov 24 17:06:50 crc kubenswrapper[4768]: I1124 17:06:50.638801 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=9.557496612 podStartE2EDuration="13.638780845s" podCreationTimestamp="2025-11-24 17:06:37 +0000 UTC" firstStartedPulling="2025-11-24 17:06:45.397905975 +0000 UTC m=+886.644874633" lastFinishedPulling="2025-11-24 17:06:49.479190208 +0000 UTC m=+890.726158866" observedRunningTime="2025-11-24 17:06:50.636957884 +0000 UTC m=+891.883926542" watchObservedRunningTime="2025-11-24 17:06:50.638780845 +0000 UTC m=+891.885749503"
Nov 24 17:06:51 crc kubenswrapper[4768]: I1124 17:06:51.216009 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Nov 24 17:06:51 crc kubenswrapper[4768]: I1124 17:06:51.253114 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Nov 24 17:06:51 crc kubenswrapper[4768]: I1124 17:06:51.422379 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-zwf2f"
Nov 24 17:06:51 crc kubenswrapper[4768]: I1124 17:06:51.596724 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f5fda78c-6764-4dfb-837a-b9e48ff5bea8","Type":"ContainerStarted","Data":"16c031fe2548e7889933c00cc00d5ba5d8d239af66caa6caea6401fcdfdf5e87"}
Nov 24 17:06:51 crc kubenswrapper[4768]: I1124 17:06:51.599779 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0d0c08ff-07c5-42d9-bbd4-77169f98868a","Type":"ContainerStarted","Data":"3647b4b7cade86a526d90a8f2410ebf9fcaab1fbd2a339697a5379a380dc5737"}
Nov 24 17:06:51 crc kubenswrapper[4768]: I1124 17:06:51.599991 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Nov 24 17:06:51 crc kubenswrapper[4768]: I1124 17:06:51.621101 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=15.78886999 podStartE2EDuration="22.621082291s" podCreationTimestamp="2025-11-24 17:06:29 +0000 UTC" firstStartedPulling="2025-11-24 17:06:38.110265981 +0000 UTC m=+879.357234639" lastFinishedPulling="2025-11-24 17:06:44.942478282 +0000 UTC m=+886.189446940" observedRunningTime="2025-11-24 17:06:51.61608276 +0000 UTC m=+892.863051438" watchObservedRunningTime="2025-11-24 17:06:51.621082291 +0000 UTC m=+892.868050949"
Nov 24 17:06:51 crc kubenswrapper[4768]: I1124 17:06:51.636336 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=17.76265655 podStartE2EDuration="24.636298602s" podCreationTimestamp="2025-11-24 17:06:27 +0000 UTC" firstStartedPulling="2025-11-24 17:06:37.969617045 +0000 UTC m=+879.216585693" lastFinishedPulling="2025-11-24 17:06:44.843259087 +0000 UTC m=+886.090227745" observedRunningTime="2025-11-24 17:06:51.630651332 +0000 UTC m=+892.877619990" watchObservedRunningTime="2025-11-24 17:06:51.636298602 +0000 UTC m=+892.883267260"
Nov 24 17:06:51 crc kubenswrapper[4768]: I1124 17:06:51.788619 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-kx42s"
Nov 24 17:06:51 crc kubenswrapper[4768]: I1124 17:06:51.838054 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zwf2f"]
Nov 24 17:06:51 crc kubenswrapper[4768]: I1124 17:06:51.838306 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-zwf2f" podUID="adec42ae-7642-46b5-abc6-492f3ceb1c14" containerName="dnsmasq-dns" containerID="cri-o://4be40df2bdab8c465eae90a96dace6d10f6eea84a88507b0d0da65b2a5e1e8de" gracePeriod=10
Nov 24 17:06:51 crc kubenswrapper[4768]: E1124 17:06:51.976626 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadec42ae_7642_46b5_abc6_492f3ceb1c14.slice/crio-4be40df2bdab8c465eae90a96dace6d10f6eea84a88507b0d0da65b2a5e1e8de.scope\": RecentStats: unable to find data in memory cache]"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.244120 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.244448 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.283699 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.348403 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zwf2f"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.472790 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adec42ae-7642-46b5-abc6-492f3ceb1c14-dns-svc\") pod \"adec42ae-7642-46b5-abc6-492f3ceb1c14\" (UID: \"adec42ae-7642-46b5-abc6-492f3ceb1c14\") "
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.472920 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdb24\" (UniqueName: \"kubernetes.io/projected/adec42ae-7642-46b5-abc6-492f3ceb1c14-kube-api-access-fdb24\") pod \"adec42ae-7642-46b5-abc6-492f3ceb1c14\" (UID: \"adec42ae-7642-46b5-abc6-492f3ceb1c14\") "
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.473011 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adec42ae-7642-46b5-abc6-492f3ceb1c14-config\") pod \"adec42ae-7642-46b5-abc6-492f3ceb1c14\" (UID: \"adec42ae-7642-46b5-abc6-492f3ceb1c14\") "
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.486572 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adec42ae-7642-46b5-abc6-492f3ceb1c14-kube-api-access-fdb24" (OuterVolumeSpecName: "kube-api-access-fdb24") pod "adec42ae-7642-46b5-abc6-492f3ceb1c14" (UID: "adec42ae-7642-46b5-abc6-492f3ceb1c14"). InnerVolumeSpecName "kube-api-access-fdb24". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.512966 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adec42ae-7642-46b5-abc6-492f3ceb1c14-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "adec42ae-7642-46b5-abc6-492f3ceb1c14" (UID: "adec42ae-7642-46b5-abc6-492f3ceb1c14"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.514569 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adec42ae-7642-46b5-abc6-492f3ceb1c14-config" (OuterVolumeSpecName: "config") pod "adec42ae-7642-46b5-abc6-492f3ceb1c14" (UID: "adec42ae-7642-46b5-abc6-492f3ceb1c14"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.575472 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adec42ae-7642-46b5-abc6-492f3ceb1c14-config\") on node \"crc\" DevicePath \"\""
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.575514 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adec42ae-7642-46b5-abc6-492f3ceb1c14-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.575526 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdb24\" (UniqueName: \"kubernetes.io/projected/adec42ae-7642-46b5-abc6-492f3ceb1c14-kube-api-access-fdb24\") on node \"crc\" DevicePath \"\""
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.608892 4768 generic.go:334] "Generic (PLEG): container finished" podID="adec42ae-7642-46b5-abc6-492f3ceb1c14" containerID="4be40df2bdab8c465eae90a96dace6d10f6eea84a88507b0d0da65b2a5e1e8de" exitCode=0
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.608991 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zwf2f"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.609036 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zwf2f" event={"ID":"adec42ae-7642-46b5-abc6-492f3ceb1c14","Type":"ContainerDied","Data":"4be40df2bdab8c465eae90a96dace6d10f6eea84a88507b0d0da65b2a5e1e8de"}
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.609069 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zwf2f" event={"ID":"adec42ae-7642-46b5-abc6-492f3ceb1c14","Type":"ContainerDied","Data":"4da537de70f0cc650297e8850ee74ab1581df7a9eb58160b2442373f7ce33b12"}
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.609084 4768 scope.go:117] "RemoveContainer" containerID="4be40df2bdab8c465eae90a96dace6d10f6eea84a88507b0d0da65b2a5e1e8de"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.645736 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zwf2f"]
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.650412 4768 scope.go:117] "RemoveContainer" containerID="b19cfe77cb583272fb3cd887a502e3ce0d9cbc6a3af01fa815731aab49496917"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.650663 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zwf2f"]
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.655135 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.667983 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.672640 4768 scope.go:117] "RemoveContainer" containerID="4be40df2bdab8c465eae90a96dace6d10f6eea84a88507b0d0da65b2a5e1e8de"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.672855 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Nov 24 17:06:52 crc kubenswrapper[4768]: E1124 17:06:52.673080 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4be40df2bdab8c465eae90a96dace6d10f6eea84a88507b0d0da65b2a5e1e8de\": container with ID starting with 4be40df2bdab8c465eae90a96dace6d10f6eea84a88507b0d0da65b2a5e1e8de not found: ID does not exist" containerID="4be40df2bdab8c465eae90a96dace6d10f6eea84a88507b0d0da65b2a5e1e8de"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.673137 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4be40df2bdab8c465eae90a96dace6d10f6eea84a88507b0d0da65b2a5e1e8de"} err="failed to get container status \"4be40df2bdab8c465eae90a96dace6d10f6eea84a88507b0d0da65b2a5e1e8de\": rpc error: code = NotFound desc = could not find container \"4be40df2bdab8c465eae90a96dace6d10f6eea84a88507b0d0da65b2a5e1e8de\": container with ID starting with 4be40df2bdab8c465eae90a96dace6d10f6eea84a88507b0d0da65b2a5e1e8de not found: ID does not exist"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.673178 4768 scope.go:117] "RemoveContainer" containerID="b19cfe77cb583272fb3cd887a502e3ce0d9cbc6a3af01fa815731aab49496917"
Nov 24 17:06:52 crc kubenswrapper[4768]: E1124 17:06:52.673550 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b19cfe77cb583272fb3cd887a502e3ce0d9cbc6a3af01fa815731aab49496917\": container with ID starting with b19cfe77cb583272fb3cd887a502e3ce0d9cbc6a3af01fa815731aab49496917 not found: ID does not exist" containerID="b19cfe77cb583272fb3cd887a502e3ce0d9cbc6a3af01fa815731aab49496917"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.673605 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b19cfe77cb583272fb3cd887a502e3ce0d9cbc6a3af01fa815731aab49496917"} err="failed to get container status \"b19cfe77cb583272fb3cd887a502e3ce0d9cbc6a3af01fa815731aab49496917\": rpc error: code = NotFound desc = could not find container \"b19cfe77cb583272fb3cd887a502e3ce0d9cbc6a3af01fa815731aab49496917\": container with ID starting with b19cfe77cb583272fb3cd887a502e3ce0d9cbc6a3af01fa815731aab49496917 not found: ID does not exist"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.887885 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-fksd8"]
Nov 24 17:06:52 crc kubenswrapper[4768]: E1124 17:06:52.888590 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adec42ae-7642-46b5-abc6-492f3ceb1c14" containerName="dnsmasq-dns"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.888607 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="adec42ae-7642-46b5-abc6-492f3ceb1c14" containerName="dnsmasq-dns"
Nov 24 17:06:52 crc kubenswrapper[4768]: E1124 17:06:52.888619 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adec42ae-7642-46b5-abc6-492f3ceb1c14" containerName="init"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.888626 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="adec42ae-7642-46b5-abc6-492f3ceb1c14" containerName="init"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.888813 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="adec42ae-7642-46b5-abc6-492f3ceb1c14" containerName="dnsmasq-dns"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.889798 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-fksd8"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.892722 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.904719 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-fksd8"]
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.990428 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-config\") pod \"dnsmasq-dns-7fd796d7df-fksd8\" (UID: \"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f\") " pod="openstack/dnsmasq-dns-7fd796d7df-fksd8"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.990514 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-fksd8\" (UID: \"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f\") " pod="openstack/dnsmasq-dns-7fd796d7df-fksd8"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.990562 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-fksd8\" (UID: \"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f\") " pod="openstack/dnsmasq-dns-7fd796d7df-fksd8"
Nov 24 17:06:52 crc kubenswrapper[4768]: I1124 17:06:52.990594 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgxrf\" (UniqueName: \"kubernetes.io/projected/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-kube-api-access-hgxrf\") pod \"dnsmasq-dns-7fd796d7df-fksd8\" (UID: \"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f\") " pod="openstack/dnsmasq-dns-7fd796d7df-fksd8"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.092228 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-config\") pod \"dnsmasq-dns-7fd796d7df-fksd8\" (UID: \"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f\") " pod="openstack/dnsmasq-dns-7fd796d7df-fksd8"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.092887 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-fksd8\" (UID: \"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f\") " pod="openstack/dnsmasq-dns-7fd796d7df-fksd8"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.093079 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-fksd8\" (UID: \"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f\") " pod="openstack/dnsmasq-dns-7fd796d7df-fksd8"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.093220 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgxrf\" (UniqueName: \"kubernetes.io/projected/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-kube-api-access-hgxrf\") pod \"dnsmasq-dns-7fd796d7df-fksd8\" (UID: \"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f\") " pod="openstack/dnsmasq-dns-7fd796d7df-fksd8"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.093569 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-config\") pod \"dnsmasq-dns-7fd796d7df-fksd8\" (UID: \"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f\") " pod="openstack/dnsmasq-dns-7fd796d7df-fksd8"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.093750 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-fksd8\" (UID: \"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f\") " pod="openstack/dnsmasq-dns-7fd796d7df-fksd8"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.093954 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-fksd8\" (UID: \"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f\") " pod="openstack/dnsmasq-dns-7fd796d7df-fksd8"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.113105 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-4crs9"]
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.114427 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4crs9"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.121409 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgxrf\" (UniqueName: \"kubernetes.io/projected/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-kube-api-access-hgxrf\") pod \"dnsmasq-dns-7fd796d7df-fksd8\" (UID: \"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f\") " pod="openstack/dnsmasq-dns-7fd796d7df-fksd8"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.123718 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4crs9"]
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.124984 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.194768 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d668018c-aa61-4c17-9af6-f00933b4160c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4crs9\" (UID: \"d668018c-aa61-4c17-9af6-f00933b4160c\") " pod="openstack/ovn-controller-metrics-4crs9"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.194850 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d668018c-aa61-4c17-9af6-f00933b4160c-config\") pod \"ovn-controller-metrics-4crs9\" (UID: \"d668018c-aa61-4c17-9af6-f00933b4160c\") " pod="openstack/ovn-controller-metrics-4crs9"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.195009 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fqbf\" (UniqueName: \"kubernetes.io/projected/d668018c-aa61-4c17-9af6-f00933b4160c-kube-api-access-6fqbf\") pod \"ovn-controller-metrics-4crs9\" (UID: \"d668018c-aa61-4c17-9af6-f00933b4160c\") " pod="openstack/ovn-controller-metrics-4crs9"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.195059 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d668018c-aa61-4c17-9af6-f00933b4160c-ovn-rundir\") pod \"ovn-controller-metrics-4crs9\" (UID: \"d668018c-aa61-4c17-9af6-f00933b4160c\") " pod="openstack/ovn-controller-metrics-4crs9"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.195080 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d668018c-aa61-4c17-9af6-f00933b4160c-combined-ca-bundle\") pod \"ovn-controller-metrics-4crs9\" (UID: \"d668018c-aa61-4c17-9af6-f00933b4160c\") " pod="openstack/ovn-controller-metrics-4crs9"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.195161 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d668018c-aa61-4c17-9af6-f00933b4160c-ovs-rundir\") pod \"ovn-controller-metrics-4crs9\" (UID: \"d668018c-aa61-4c17-9af6-f00933b4160c\") " pod="openstack/ovn-controller-metrics-4crs9"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.216042 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-fksd8"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.231317 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.232562 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.238216 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.238423 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-j2lb7"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.238491 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.238665 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.243866 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-fksd8"]
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.280572 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.297290 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5wpmn"]
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.298642 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d668018c-aa61-4c17-9af6-f00933b4160c-ovn-rundir\") pod \"ovn-controller-metrics-4crs9\" (UID: \"d668018c-aa61-4c17-9af6-f00933b4160c\") " pod="openstack/ovn-controller-metrics-4crs9"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.298695 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d668018c-aa61-4c17-9af6-f00933b4160c-combined-ca-bundle\") pod \"ovn-controller-metrics-4crs9\" (UID: \"d668018c-aa61-4c17-9af6-f00933b4160c\") " pod="openstack/ovn-controller-metrics-4crs9"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.298764 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.299063 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d668018c-aa61-4c17-9af6-f00933b4160c-ovs-rundir\") pod \"ovn-controller-metrics-4crs9\" (UID: \"d668018c-aa61-4c17-9af6-f00933b4160c\") " pod="openstack/ovn-controller-metrics-4crs9"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.299140 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d668018c-aa61-4c17-9af6-f00933b4160c-ovn-rundir\") pod \"ovn-controller-metrics-4crs9\" (UID: \"d668018c-aa61-4c17-9af6-f00933b4160c\") " pod="openstack/ovn-controller-metrics-4crs9"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.298768 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d668018c-aa61-4c17-9af6-f00933b4160c-ovs-rundir\") pod \"ovn-controller-metrics-4crs9\" (UID: \"d668018c-aa61-4c17-9af6-f00933b4160c\") " pod="openstack/ovn-controller-metrics-4crs9"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.299470 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d668018c-aa61-4c17-9af6-f00933b4160c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4crs9\" (UID: \"d668018c-aa61-4c17-9af6-f00933b4160c\") " pod="openstack/ovn-controller-metrics-4crs9"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.299599 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d668018c-aa61-4c17-9af6-f00933b4160c-config\") pod \"ovn-controller-metrics-4crs9\" (UID: \"d668018c-aa61-4c17-9af6-f00933b4160c\") " pod="openstack/ovn-controller-metrics-4crs9"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.299681 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fqbf\" (UniqueName: \"kubernetes.io/projected/d668018c-aa61-4c17-9af6-f00933b4160c-kube-api-access-6fqbf\") pod \"ovn-controller-metrics-4crs9\" (UID: \"d668018c-aa61-4c17-9af6-f00933b4160c\") " pod="openstack/ovn-controller-metrics-4crs9"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.300938 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d668018c-aa61-4c17-9af6-f00933b4160c-config\") pod \"ovn-controller-metrics-4crs9\" (UID: \"d668018c-aa61-4c17-9af6-f00933b4160c\") " pod="openstack/ovn-controller-metrics-4crs9"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.304975 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d668018c-aa61-4c17-9af6-f00933b4160c-combined-ca-bundle\") pod \"ovn-controller-metrics-4crs9\" (UID: \"d668018c-aa61-4c17-9af6-f00933b4160c\") " pod="openstack/ovn-controller-metrics-4crs9"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.311023 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d668018c-aa61-4c17-9af6-f00933b4160c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4crs9\" (UID: \"d668018c-aa61-4c17-9af6-f00933b4160c\") " pod="openstack/ovn-controller-metrics-4crs9"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.314029 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.333753 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fqbf\" (UniqueName: \"kubernetes.io/projected/d668018c-aa61-4c17-9af6-f00933b4160c-kube-api-access-6fqbf\") pod \"ovn-controller-metrics-4crs9\" (UID: \"d668018c-aa61-4c17-9af6-f00933b4160c\") " pod="openstack/ovn-controller-metrics-4crs9"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.339146 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5wpmn"]
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.401562 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c88k6\" (UniqueName: \"kubernetes.io/projected/23b2afc7-7876-4fa7-9610-96c2ac826ccd-kube-api-access-c88k6\") pod \"dnsmasq-dns-86db49b7ff-5wpmn\" (UID: \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.401601 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-config\") pod \"dnsmasq-dns-86db49b7ff-5wpmn\" (UID: \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.401622 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84f66fa0-19d0-40f2-a4d0-4ddc58101d00-config\") pod \"ovn-northd-0\" (UID: \"84f66fa0-19d0-40f2-a4d0-4ddc58101d00\") " pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.401648 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f66fa0-19d0-40f2-a4d0-4ddc58101d00-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"84f66fa0-19d0-40f2-a4d0-4ddc58101d00\") " pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.401769 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/84f66fa0-19d0-40f2-a4d0-4ddc58101d00-scripts\") pod \"ovn-northd-0\" (UID: \"84f66fa0-19d0-40f2-a4d0-4ddc58101d00\") " pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.401791 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-5wpmn\" (UID: \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.401823 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/84f66fa0-19d0-40f2-a4d0-4ddc58101d00-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"84f66fa0-19d0-40f2-a4d0-4ddc58101d00\") " pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.401849 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/84f66fa0-19d0-40f2-a4d0-4ddc58101d00-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"84f66fa0-19d0-40f2-a4d0-4ddc58101d00\") " pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.402088 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-5wpmn\" (UID: \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.402165 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j76gq\" (UniqueName: \"kubernetes.io/projected/84f66fa0-19d0-40f2-a4d0-4ddc58101d00-kube-api-access-j76gq\") pod \"ovn-northd-0\" (UID: \"84f66fa0-19d0-40f2-a4d0-4ddc58101d00\") " pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.402305 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/84f66fa0-19d0-40f2-a4d0-4ddc58101d00-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"84f66fa0-19d0-40f2-a4d0-4ddc58101d00\") " pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.402330 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-5wpmn\" (UID: \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.485986 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4crs9"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.503575 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-5wpmn\" (UID: \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.503633 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j76gq\" (UniqueName: \"kubernetes.io/projected/84f66fa0-19d0-40f2-a4d0-4ddc58101d00-kube-api-access-j76gq\") pod \"ovn-northd-0\" (UID: \"84f66fa0-19d0-40f2-a4d0-4ddc58101d00\") " pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.503693 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/84f66fa0-19d0-40f2-a4d0-4ddc58101d00-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"84f66fa0-19d0-40f2-a4d0-4ddc58101d00\") " pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.503718 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-5wpmn\" (UID: \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.503766 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c88k6\" (UniqueName: \"kubernetes.io/projected/23b2afc7-7876-4fa7-9610-96c2ac826ccd-kube-api-access-c88k6\") pod \"dnsmasq-dns-86db49b7ff-5wpmn\" (UID: \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.503785 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-config\") pod \"dnsmasq-dns-86db49b7ff-5wpmn\" (UID: \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.503806 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84f66fa0-19d0-40f2-a4d0-4ddc58101d00-config\") pod \"ovn-northd-0\" (UID: \"84f66fa0-19d0-40f2-a4d0-4ddc58101d00\") " pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.503840 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f66fa0-19d0-40f2-a4d0-4ddc58101d00-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"84f66fa0-19d0-40f2-a4d0-4ddc58101d00\") " pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.503872 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/84f66fa0-19d0-40f2-a4d0-4ddc58101d00-scripts\") pod \"ovn-northd-0\" (UID: \"84f66fa0-19d0-40f2-a4d0-4ddc58101d00\") " pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.503897 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-5wpmn\" (UID: \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.503942 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/84f66fa0-19d0-40f2-a4d0-4ddc58101d00-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"84f66fa0-19d0-40f2-a4d0-4ddc58101d00\") " pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.503977 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/84f66fa0-19d0-40f2-a4d0-4ddc58101d00-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"84f66fa0-19d0-40f2-a4d0-4ddc58101d00\") " pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.504587 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/84f66fa0-19d0-40f2-a4d0-4ddc58101d00-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"84f66fa0-19d0-40f2-a4d0-4ddc58101d00\") " pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.505406 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-5wpmn\" (UID: \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.507268 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84f66fa0-19d0-40f2-a4d0-4ddc58101d00-config\") pod \"ovn-northd-0\" (UID: \"84f66fa0-19d0-40f2-a4d0-4ddc58101d00\") " pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.508462 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-5wpmn\" (UID: \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.508679 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-5wpmn\" (UID: \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.508947 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/84f66fa0-19d0-40f2-a4d0-4ddc58101d00-scripts\") pod \"ovn-northd-0\" (UID: \"84f66fa0-19d0-40f2-a4d0-4ddc58101d00\") " pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.511998 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/84f66fa0-19d0-40f2-a4d0-4ddc58101d00-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"84f66fa0-19d0-40f2-a4d0-4ddc58101d00\") " pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.512660 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-config\") pod \"dnsmasq-dns-86db49b7ff-5wpmn\" (UID: \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.516175 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f66fa0-19d0-40f2-a4d0-4ddc58101d00-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"84f66fa0-19d0-40f2-a4d0-4ddc58101d00\") " pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.516370 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/84f66fa0-19d0-40f2-a4d0-4ddc58101d00-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"84f66fa0-19d0-40f2-a4d0-4ddc58101d00\") " pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.533982 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c88k6\" (UniqueName: \"kubernetes.io/projected/23b2afc7-7876-4fa7-9610-96c2ac826ccd-kube-api-access-c88k6\") pod \"dnsmasq-dns-86db49b7ff-5wpmn\" (UID: \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\") " pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.535291 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j76gq\" (UniqueName: \"kubernetes.io/projected/84f66fa0-19d0-40f2-a4d0-4ddc58101d00-kube-api-access-j76gq\") pod \"ovn-northd-0\" (UID: \"84f66fa0-19d0-40f2-a4d0-4ddc58101d00\") " pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.596681 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adec42ae-7642-46b5-abc6-492f3ceb1c14" path="/var/lib/kubelet/pods/adec42ae-7642-46b5-abc6-492f3ceb1c14/volumes"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.608248 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.644747 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn"
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.805794 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-fksd8"]
Nov 24 17:06:53 crc kubenswrapper[4768]: I1124 17:06:53.929525 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4crs9"]
Nov 24 17:06:54 crc kubenswrapper[4768]: I1124 17:06:54.085405 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Nov 24 17:06:54 crc kubenswrapper[4768]: W1124 17:06:54.089942 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84f66fa0_19d0_40f2_a4d0_4ddc58101d00.slice/crio-f5d9564e8c0e41f9399485fc725362b7af3ab61e7a3f87aa6224d86a23a5832c WatchSource:0}: Error finding container f5d9564e8c0e41f9399485fc725362b7af3ab61e7a3f87aa6224d86a23a5832c: Status 404 returned error can't find the container with id f5d9564e8c0e41f9399485fc725362b7af3ab61e7a3f87aa6224d86a23a5832c
Nov 24 17:06:54 crc kubenswrapper[4768]: I1124 17:06:54.134626 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5wpmn"]
Nov 24 17:06:54 crc kubenswrapper[4768]: I1124 17:06:54.637701 4768 generic.go:334] "Generic (PLEG): container finished" podID="23b2afc7-7876-4fa7-9610-96c2ac826ccd" containerID="774c43d534804184d4d672b74869a8ff48776ee526ff6cca3f3ae82a8234235e" exitCode=0
Nov 24 17:06:54 crc kubenswrapper[4768]: I1124 17:06:54.638030 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn" event={"ID":"23b2afc7-7876-4fa7-9610-96c2ac826ccd","Type":"ContainerDied","Data":"774c43d534804184d4d672b74869a8ff48776ee526ff6cca3f3ae82a8234235e"}
Nov 24 17:06:54 crc kubenswrapper[4768]: I1124 17:06:54.638121 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn" event={"ID":"23b2afc7-7876-4fa7-9610-96c2ac826ccd","Type":"ContainerStarted","Data":"6f792a12e6af34b18e4728b851a6473d5a7c78cc44a128bb6e71efd77a315b8e"}
Nov 24 17:06:54 crc kubenswrapper[4768]: I1124 17:06:54.640648 4768 generic.go:334] "Generic (PLEG): container finished" podID="c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f" containerID="2a4d4343211a43dc050eb9d227e06cd2acb73d747ad2ba1e62fd13479aa0edfd" exitCode=0
Nov 24 17:06:54 crc kubenswrapper[4768]: I1124 17:06:54.640775 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-fksd8" event={"ID":"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f","Type":"ContainerDied","Data":"2a4d4343211a43dc050eb9d227e06cd2acb73d747ad2ba1e62fd13479aa0edfd"}
Nov 24 17:06:54 crc kubenswrapper[4768]: I1124 17:06:54.640844 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-fksd8" event={"ID":"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f","Type":"ContainerStarted","Data":"33ad40c407de07acf0ccf95b6ec96fc00c7b18d07e1a9c04cd1a4dd60d3b911a"}
Nov 24 17:06:54 crc kubenswrapper[4768]: I1124 17:06:54.646209 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4crs9" event={"ID":"d668018c-aa61-4c17-9af6-f00933b4160c","Type":"ContainerStarted","Data":"5907ef1a293189be3186614940969a9496ea7726f47c123fbb7f59a8a97bb2b1"}
Nov 24 17:06:54 crc kubenswrapper[4768]: I1124 17:06:54.646274 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4crs9" event={"ID":"d668018c-aa61-4c17-9af6-f00933b4160c","Type":"ContainerStarted","Data":"fdac881c8beda5534a13fe676400c8be271c11ad7a3c123fb7090039798d58f9"}
Nov 24 17:06:54 crc kubenswrapper[4768]: I1124 17:06:54.647900 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"84f66fa0-19d0-40f2-a4d0-4ddc58101d00","Type":"ContainerStarted","Data":"f5d9564e8c0e41f9399485fc725362b7af3ab61e7a3f87aa6224d86a23a5832c"}
Nov 24 17:06:54 crc kubenswrapper[4768]: I1124 17:06:54.698879 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-4crs9" podStartSLOduration=1.6988509280000001 podStartE2EDuration="1.698850928s" podCreationTimestamp="2025-11-24 17:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:06:54.676959599 +0000 UTC m=+895.923928287" watchObservedRunningTime="2025-11-24 17:06:54.698850928 +0000 UTC m=+895.945819606"
Nov 24 17:06:55 crc kubenswrapper[4768]: I1124 17:06:55.043032 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-fksd8"
Nov 24 17:06:55 crc kubenswrapper[4768]: I1124 17:06:55.153114 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-config\") pod \"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f\" (UID: \"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f\") "
Nov 24 17:06:55 crc kubenswrapper[4768]: I1124 17:06:55.153618 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-ovsdbserver-nb\") pod \"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f\" (UID: \"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f\") "
Nov 24 17:06:55 crc kubenswrapper[4768]: I1124 17:06:55.153747 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-dns-svc\") pod \"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f\" (UID: \"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f\") "
Nov 24 17:06:55 crc kubenswrapper[4768]: I1124 17:06:55.153787 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgxrf\" (UniqueName: \"kubernetes.io/projected/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-kube-api-access-hgxrf\") pod \"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f\" (UID: \"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f\") "
Nov 24 17:06:55 crc kubenswrapper[4768]: I1124 17:06:55.169670 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-kube-api-access-hgxrf" (OuterVolumeSpecName: "kube-api-access-hgxrf") pod "c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f" (UID: "c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f"). InnerVolumeSpecName "kube-api-access-hgxrf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 17:06:55 crc kubenswrapper[4768]: I1124 17:06:55.180282 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f" (UID: "c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 17:06:55 crc kubenswrapper[4768]: I1124 17:06:55.185027 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f" (UID: "c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 17:06:55 crc kubenswrapper[4768]: I1124 17:06:55.186999 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-config" (OuterVolumeSpecName: "config") pod "c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f" (UID: "c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 17:06:55 crc kubenswrapper[4768]: I1124 17:06:55.256292 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-config\") on node \"crc\" DevicePath \"\""
Nov 24 17:06:55 crc kubenswrapper[4768]: I1124 17:06:55.256323 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 24 17:06:55 crc kubenswrapper[4768]: I1124 17:06:55.256333 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 24 17:06:55 crc kubenswrapper[4768]: I1124 17:06:55.256359 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgxrf\" (UniqueName: \"kubernetes.io/projected/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f-kube-api-access-hgxrf\") on node \"crc\" DevicePath \"\""
Nov 24 17:06:55 crc kubenswrapper[4768]: I1124 17:06:55.658166 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn" event={"ID":"23b2afc7-7876-4fa7-9610-96c2ac826ccd","Type":"ContainerStarted","Data":"f32c818c02706241858e72e171ecee31be63c980ba2362d2f28dcf72e910c9a3"}
Nov 24 17:06:55 crc kubenswrapper[4768]: I1124 17:06:55.658407 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn"
Nov 24 17:06:55 crc kubenswrapper[4768]: I1124 17:06:55.664130 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-fksd8" event={"ID":"c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f","Type":"ContainerDied","Data":"33ad40c407de07acf0ccf95b6ec96fc00c7b18d07e1a9c04cd1a4dd60d3b911a"}
Nov 24 17:06:55 crc kubenswrapper[4768]: I1124 17:06:55.664195 4768 scope.go:117] "RemoveContainer" containerID="2a4d4343211a43dc050eb9d227e06cd2acb73d747ad2ba1e62fd13479aa0edfd"
Nov 24 17:06:55 crc kubenswrapper[4768]: I1124 17:06:55.664192 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-fksd8"
Nov 24 17:06:55 crc kubenswrapper[4768]: I1124 17:06:55.711520 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn" podStartSLOduration=2.711495771 podStartE2EDuration="2.711495771s" podCreationTimestamp="2025-11-24 17:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:06:55.687193334 +0000 UTC m=+896.934162012" watchObservedRunningTime="2025-11-24 17:06:55.711495771 +0000 UTC m=+896.958464419"
Nov 24 17:06:55 crc kubenswrapper[4768]: I1124 17:06:55.750904 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-fksd8"]
Nov 24 17:06:55 crc kubenswrapper[4768]: I1124 17:06:55.757097 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-fksd8"]
Nov 24 17:06:55 crc kubenswrapper[4768]: I1124 17:06:55.780148 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Nov 24 17:06:57 crc kubenswrapper[4768]: I1124 17:06:57.595199 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f" path="/var/lib/kubelet/pods/c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f/volumes"
Nov 24 17:06:58 crc kubenswrapper[4768]: I1124 17:06:58.688632 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"84f66fa0-19d0-40f2-a4d0-4ddc58101d00","Type":"ContainerStarted","Data":"26c786fc0dd5984f7c568bc9697cd2d40ebf30cd40daeeac7f6b5cba8dc6bcd6"}
Nov 24 17:06:58 crc kubenswrapper[4768]: I1124 17:06:58.689114 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"84f66fa0-19d0-40f2-a4d0-4ddc58101d00","Type":"ContainerStarted","Data":"269b56d103e4e1a451307e60418afb09c7f06ead3e3c0aa35cf4880bafe39240"}
Nov 24 17:06:58 crc kubenswrapper[4768]: I1124 17:06:58.690433 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Nov 24 17:06:58 crc kubenswrapper[4768]: I1124 17:06:58.713041 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.262856781 podStartE2EDuration="5.713020804s" podCreationTimestamp="2025-11-24 17:06:53 +0000 UTC" firstStartedPulling="2025-11-24 17:06:54.094511876 +0000 UTC m=+895.341480534" lastFinishedPulling="2025-11-24 17:06:57.544675899 +0000 UTC m=+898.791644557" observedRunningTime="2025-11-24 17:06:58.707899369 +0000 UTC m=+899.954868037" watchObservedRunningTime="2025-11-24 17:06:58.713020804 +0000 UTC m=+899.959989462"
Nov 24 17:06:59 crc kubenswrapper[4768]: I1124 17:06:59.350165 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Nov 24 17:06:59 crc kubenswrapper[4768]: I1124 17:06:59.350753 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Nov 24 17:06:59 crc kubenswrapper[4768]: I1124 17:06:59.433910 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Nov 24 17:06:59 crc kubenswrapper[4768]: I1124 17:06:59.782276 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.367029 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d93f-account-create-9nz7l"]
Nov 24 17:07:00 crc kubenswrapper[4768]: E1124 17:07:00.367800 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f" containerName="init"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.367814 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f" containerName="init"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.367987 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9dec8e6-cf6a-4e3c-901a-2d6ed4889c3f" containerName="init"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.368609 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d93f-account-create-9nz7l"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.372225 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.374220 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d93f-account-create-9nz7l"]
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.433974 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-8c2vc"]
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.435308 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8c2vc"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.441357 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8c2vc"]
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.450848 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvrvl\" (UniqueName: \"kubernetes.io/projected/8f9e0002-d010-466b-9a99-c3a4ae2bf020-kube-api-access-qvrvl\") pod \"keystone-d93f-account-create-9nz7l\" (UID: \"8f9e0002-d010-466b-9a99-c3a4ae2bf020\") " pod="openstack/keystone-d93f-account-create-9nz7l"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.450924 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f9e0002-d010-466b-9a99-c3a4ae2bf020-operator-scripts\") pod \"keystone-d93f-account-create-9nz7l\" (UID: \"8f9e0002-d010-466b-9a99-c3a4ae2bf020\") " pod="openstack/keystone-d93f-account-create-9nz7l"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.552090 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f9e0002-d010-466b-9a99-c3a4ae2bf020-operator-scripts\") pod \"keystone-d93f-account-create-9nz7l\" (UID: \"8f9e0002-d010-466b-9a99-c3a4ae2bf020\") " pod="openstack/keystone-d93f-account-create-9nz7l"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.552276 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/339680b8-1c63-400b-92a2-5b3dff0d90f3-operator-scripts\") pod \"keystone-db-create-8c2vc\" (UID: \"339680b8-1c63-400b-92a2-5b3dff0d90f3\") " pod="openstack/keystone-db-create-8c2vc"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.552381 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdxr4\" (UniqueName: \"kubernetes.io/projected/339680b8-1c63-400b-92a2-5b3dff0d90f3-kube-api-access-mdxr4\") pod \"keystone-db-create-8c2vc\" (UID: \"339680b8-1c63-400b-92a2-5b3dff0d90f3\") " pod="openstack/keystone-db-create-8c2vc"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.552465 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvrvl\" (UniqueName: \"kubernetes.io/projected/8f9e0002-d010-466b-9a99-c3a4ae2bf020-kube-api-access-qvrvl\") pod \"keystone-d93f-account-create-9nz7l\" (UID: \"8f9e0002-d010-466b-9a99-c3a4ae2bf020\") " pod="openstack/keystone-d93f-account-create-9nz7l"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.553627 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f9e0002-d010-466b-9a99-c3a4ae2bf020-operator-scripts\") pod \"keystone-d93f-account-create-9nz7l\" (UID: \"8f9e0002-d010-466b-9a99-c3a4ae2bf020\") " pod="openstack/keystone-d93f-account-create-9nz7l"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.576065 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvrvl\" (UniqueName: \"kubernetes.io/projected/8f9e0002-d010-466b-9a99-c3a4ae2bf020-kube-api-access-qvrvl\") pod \"keystone-d93f-account-create-9nz7l\" (UID: \"8f9e0002-d010-466b-9a99-c3a4ae2bf020\") " pod="openstack/keystone-d93f-account-create-9nz7l"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.655253 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/339680b8-1c63-400b-92a2-5b3dff0d90f3-operator-scripts\") pod \"keystone-db-create-8c2vc\" (UID: \"339680b8-1c63-400b-92a2-5b3dff0d90f3\") " pod="openstack/keystone-db-create-8c2vc"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.655606 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdxr4\" (UniqueName: \"kubernetes.io/projected/339680b8-1c63-400b-92a2-5b3dff0d90f3-kube-api-access-mdxr4\") pod \"keystone-db-create-8c2vc\" (UID: \"339680b8-1c63-400b-92a2-5b3dff0d90f3\") " pod="openstack/keystone-db-create-8c2vc"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.656142 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/339680b8-1c63-400b-92a2-5b3dff0d90f3-operator-scripts\") pod \"keystone-db-create-8c2vc\" (UID: \"339680b8-1c63-400b-92a2-5b3dff0d90f3\") " pod="openstack/keystone-db-create-8c2vc"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.675993 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-8hn9l"]
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.677008 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdxr4\" (UniqueName: \"kubernetes.io/projected/339680b8-1c63-400b-92a2-5b3dff0d90f3-kube-api-access-mdxr4\") pod \"keystone-db-create-8c2vc\" (UID: \"339680b8-1c63-400b-92a2-5b3dff0d90f3\") " pod="openstack/keystone-db-create-8c2vc"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.681120 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8hn9l"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.683726 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8hn9l"]
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.691774 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d93f-account-create-9nz7l"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.751469 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.751524 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.751984 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8c2vc"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.759882 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j5w6\" (UniqueName: \"kubernetes.io/projected/71c5b0d6-d045-486b-88fa-32652a7d875f-kube-api-access-7j5w6\") pod \"placement-db-create-8hn9l\" (UID: \"71c5b0d6-d045-486b-88fa-32652a7d875f\") " pod="openstack/placement-db-create-8hn9l"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.760293 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71c5b0d6-d045-486b-88fa-32652a7d875f-operator-scripts\") pod \"placement-db-create-8hn9l\" (UID: \"71c5b0d6-d045-486b-88fa-32652a7d875f\") " pod="openstack/placement-db-create-8hn9l"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.779366 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a945-account-create-lxxbd"]
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.780568 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a945-account-create-lxxbd"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.782832 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.786797 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a945-account-create-lxxbd"]
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.862634 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j5w6\" (UniqueName: \"kubernetes.io/projected/71c5b0d6-d045-486b-88fa-32652a7d875f-kube-api-access-7j5w6\") pod \"placement-db-create-8hn9l\" (UID: \"71c5b0d6-d045-486b-88fa-32652a7d875f\") " pod="openstack/placement-db-create-8hn9l"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.863013 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3089d03-ccf1-4aff-ac88-45fefc76ec67-operator-scripts\") pod \"placement-a945-account-create-lxxbd\" (UID: \"d3089d03-ccf1-4aff-ac88-45fefc76ec67\") " pod="openstack/placement-a945-account-create-lxxbd"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.863037 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhhzw\" (UniqueName: \"kubernetes.io/projected/d3089d03-ccf1-4aff-ac88-45fefc76ec67-kube-api-access-qhhzw\") pod \"placement-a945-account-create-lxxbd\" (UID: \"d3089d03-ccf1-4aff-ac88-45fefc76ec67\") " pod="openstack/placement-a945-account-create-lxxbd"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.863082 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71c5b0d6-d045-486b-88fa-32652a7d875f-operator-scripts\") pod \"placement-db-create-8hn9l\" (UID: \"71c5b0d6-d045-486b-88fa-32652a7d875f\") " pod="openstack/placement-db-create-8hn9l"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.864081 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71c5b0d6-d045-486b-88fa-32652a7d875f-operator-scripts\") pod \"placement-db-create-8hn9l\" (UID: \"71c5b0d6-d045-486b-88fa-32652a7d875f\") " pod="openstack/placement-db-create-8hn9l"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.884058 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j5w6\" (UniqueName: \"kubernetes.io/projected/71c5b0d6-d045-486b-88fa-32652a7d875f-kube-api-access-7j5w6\") pod \"placement-db-create-8hn9l\" (UID: \"71c5b0d6-d045-486b-88fa-32652a7d875f\") " pod="openstack/placement-db-create-8hn9l"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.913666 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.965410 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3089d03-ccf1-4aff-ac88-45fefc76ec67-operator-scripts\") pod \"placement-a945-account-create-lxxbd\" (UID: \"d3089d03-ccf1-4aff-ac88-45fefc76ec67\") " pod="openstack/placement-a945-account-create-lxxbd"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.965444 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhhzw\" (UniqueName: \"kubernetes.io/projected/d3089d03-ccf1-4aff-ac88-45fefc76ec67-kube-api-access-qhhzw\") pod \"placement-a945-account-create-lxxbd\" (UID: \"d3089d03-ccf1-4aff-ac88-45fefc76ec67\") " pod="openstack/placement-a945-account-create-lxxbd"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.966097 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3089d03-ccf1-4aff-ac88-45fefc76ec67-operator-scripts\") pod \"placement-a945-account-create-lxxbd\" (UID: \"d3089d03-ccf1-4aff-ac88-45fefc76ec67\") " pod="openstack/placement-a945-account-create-lxxbd"
Nov 24 17:07:00 crc kubenswrapper[4768]: I1124 17:07:00.985599 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhhzw\" (UniqueName: \"kubernetes.io/projected/d3089d03-ccf1-4aff-ac88-45fefc76ec67-kube-api-access-qhhzw\") pod \"placement-a945-account-create-lxxbd\" (UID: \"d3089d03-ccf1-4aff-ac88-45fefc76ec67\") " pod="openstack/placement-a945-account-create-lxxbd"
Nov 24 17:07:01 crc kubenswrapper[4768]: W1124 17:07:01.135476 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod339680b8_1c63_400b_92a2_5b3dff0d90f3.slice/crio-ec1086d134e2688f4b525681a2dfe031e7d64dd64630d7644a4736f85b5822c8 WatchSource:0}: Error finding container ec1086d134e2688f4b525681a2dfe031e7d64dd64630d7644a4736f85b5822c8: Status 404 returned error can't find the container with id ec1086d134e2688f4b525681a2dfe031e7d64dd64630d7644a4736f85b5822c8
Nov 24 17:07:01 crc kubenswrapper[4768]: I1124 17:07:01.138024 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8c2vc"]
Nov 24 17:07:01 crc kubenswrapper[4768]: W1124 17:07:01.145360 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f9e0002_d010_466b_9a99_c3a4ae2bf020.slice/crio-c87dd39f7fbe3f16413a689f80ca438aa9835e5ca40dafdbe113631ab3843323 WatchSource:0}: Error finding container c87dd39f7fbe3f16413a689f80ca438aa9835e5ca40dafdbe113631ab3843323: Status 404 returned error can't find the container with id c87dd39f7fbe3f16413a689f80ca438aa9835e5ca40dafdbe113631ab3843323
Nov 24 17:07:01 crc kubenswrapper[4768]: I1124 17:07:01.148197 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d93f-account-create-9nz7l"]
Nov 24 17:07:01 crc kubenswrapper[4768]: I1124 17:07:01.148756 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8hn9l"
Nov 24 17:07:01 crc kubenswrapper[4768]: I1124 17:07:01.168451 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a945-account-create-lxxbd"
Nov 24 17:07:01 crc kubenswrapper[4768]: I1124 17:07:01.572952 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-8hn9l"]
Nov 24 17:07:01 crc kubenswrapper[4768]: W1124 17:07:01.576426 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71c5b0d6_d045_486b_88fa_32652a7d875f.slice/crio-1e7df359b69374ce8b96ca4611fb674f1a950e07b6916d629b08e83c540356ac WatchSource:0}: Error finding container 1e7df359b69374ce8b96ca4611fb674f1a950e07b6916d629b08e83c540356ac: Status 404 returned error can't find the container with id 1e7df359b69374ce8b96ca4611fb674f1a950e07b6916d629b08e83c540356ac
Nov 24 17:07:01 crc kubenswrapper[4768]: I1124 17:07:01.663277 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a945-account-create-lxxbd"]
Nov 24 17:07:01 crc kubenswrapper[4768]: W1124 17:07:01.667167 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3089d03_ccf1_4aff_ac88_45fefc76ec67.slice/crio-d129495f0c8bd7fa1aab84b08c53a0bfb2503050fc3e58993f907e897c8b3b9e WatchSource:0}: Error finding container d129495f0c8bd7fa1aab84b08c53a0bfb2503050fc3e58993f907e897c8b3b9e: Status 404 returned error can't find the container with id d129495f0c8bd7fa1aab84b08c53a0bfb2503050fc3e58993f907e897c8b3b9e
Nov 24 17:07:01 crc kubenswrapper[4768]: I1124 17:07:01.733810 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8hn9l" event={"ID":"71c5b0d6-d045-486b-88fa-32652a7d875f","Type":"ContainerStarted","Data":"1e7df359b69374ce8b96ca4611fb674f1a950e07b6916d629b08e83c540356ac"}
Nov 24 17:07:01 crc kubenswrapper[4768]: I1124 17:07:01.735847 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d93f-account-create-9nz7l" event={"ID":"8f9e0002-d010-466b-9a99-c3a4ae2bf020","Type":"ContainerStarted","Data":"c87dd39f7fbe3f16413a689f80ca438aa9835e5ca40dafdbe113631ab3843323"}
Nov 24 17:07:01 crc kubenswrapper[4768]: I1124 17:07:01.737248 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a945-account-create-lxxbd" event={"ID":"d3089d03-ccf1-4aff-ac88-45fefc76ec67","Type":"ContainerStarted","Data":"d129495f0c8bd7fa1aab84b08c53a0bfb2503050fc3e58993f907e897c8b3b9e"}
Nov 24 17:07:01 crc kubenswrapper[4768]: I1124 17:07:01.738795 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8c2vc" event={"ID":"339680b8-1c63-400b-92a2-5b3dff0d90f3","Type":"ContainerStarted","Data":"ec1086d134e2688f4b525681a2dfe031e7d64dd64630d7644a4736f85b5822c8"}
Nov 24 17:07:01 crc kubenswrapper[4768]: I1124 17:07:01.815414 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Nov 24 17:07:02 crc kubenswrapper[4768]: I1124 17:07:02.729284 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5wpmn"]
Nov 24 17:07:02 crc kubenswrapper[4768]: I1124 17:07:02.729900 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn" podUID="23b2afc7-7876-4fa7-9610-96c2ac826ccd" containerName="dnsmasq-dns" containerID="cri-o://f32c818c02706241858e72e171ecee31be63c980ba2362d2f28dcf72e910c9a3" gracePeriod=10
Nov 24 17:07:02 crc kubenswrapper[4768]: I1124 17:07:02.733004 4768 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn" Nov 24 17:07:02 crc kubenswrapper[4768]: I1124 17:07:02.764323 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-5dmkj"] Nov 24 17:07:02 crc kubenswrapper[4768]: I1124 17:07:02.765781 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-5dmkj" Nov 24 17:07:02 crc kubenswrapper[4768]: I1124 17:07:02.784484 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5dmkj"] Nov 24 17:07:02 crc kubenswrapper[4768]: I1124 17:07:02.901264 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtmkr\" (UniqueName: \"kubernetes.io/projected/7771e669-ecdf-44a6-9e16-409eade01b8a-kube-api-access-gtmkr\") pod \"dnsmasq-dns-698758b865-5dmkj\" (UID: \"7771e669-ecdf-44a6-9e16-409eade01b8a\") " pod="openstack/dnsmasq-dns-698758b865-5dmkj" Nov 24 17:07:02 crc kubenswrapper[4768]: I1124 17:07:02.901478 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-5dmkj\" (UID: \"7771e669-ecdf-44a6-9e16-409eade01b8a\") " pod="openstack/dnsmasq-dns-698758b865-5dmkj" Nov 24 17:07:02 crc kubenswrapper[4768]: I1124 17:07:02.901520 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-dns-svc\") pod \"dnsmasq-dns-698758b865-5dmkj\" (UID: \"7771e669-ecdf-44a6-9e16-409eade01b8a\") " pod="openstack/dnsmasq-dns-698758b865-5dmkj" Nov 24 17:07:02 crc kubenswrapper[4768]: I1124 17:07:02.901694 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-config\") pod \"dnsmasq-dns-698758b865-5dmkj\" (UID: \"7771e669-ecdf-44a6-9e16-409eade01b8a\") " pod="openstack/dnsmasq-dns-698758b865-5dmkj" Nov 24 17:07:02 crc kubenswrapper[4768]: I1124 17:07:02.901719 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-5dmkj\" (UID: \"7771e669-ecdf-44a6-9e16-409eade01b8a\") " pod="openstack/dnsmasq-dns-698758b865-5dmkj" Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.003080 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-5dmkj\" (UID: \"7771e669-ecdf-44a6-9e16-409eade01b8a\") " pod="openstack/dnsmasq-dns-698758b865-5dmkj" Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.003681 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-dns-svc\") pod \"dnsmasq-dns-698758b865-5dmkj\" (UID: \"7771e669-ecdf-44a6-9e16-409eade01b8a\") " pod="openstack/dnsmasq-dns-698758b865-5dmkj" Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.003821 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-config\") pod \"dnsmasq-dns-698758b865-5dmkj\" (UID: \"7771e669-ecdf-44a6-9e16-409eade01b8a\") " pod="openstack/dnsmasq-dns-698758b865-5dmkj" Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.003842 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-5dmkj\" (UID: \"7771e669-ecdf-44a6-9e16-409eade01b8a\") " pod="openstack/dnsmasq-dns-698758b865-5dmkj" Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.003988 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtmkr\" (UniqueName: \"kubernetes.io/projected/7771e669-ecdf-44a6-9e16-409eade01b8a-kube-api-access-gtmkr\") pod \"dnsmasq-dns-698758b865-5dmkj\" (UID: \"7771e669-ecdf-44a6-9e16-409eade01b8a\") " pod="openstack/dnsmasq-dns-698758b865-5dmkj" Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.004726 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-config\") pod \"dnsmasq-dns-698758b865-5dmkj\" (UID: \"7771e669-ecdf-44a6-9e16-409eade01b8a\") " pod="openstack/dnsmasq-dns-698758b865-5dmkj" Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.004763 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-5dmkj\" (UID: \"7771e669-ecdf-44a6-9e16-409eade01b8a\") " pod="openstack/dnsmasq-dns-698758b865-5dmkj" Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.004847 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-5dmkj\" (UID: \"7771e669-ecdf-44a6-9e16-409eade01b8a\") " pod="openstack/dnsmasq-dns-698758b865-5dmkj" Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.004893 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-dns-svc\") pod \"dnsmasq-dns-698758b865-5dmkj\" (UID: \"7771e669-ecdf-44a6-9e16-409eade01b8a\") " pod="openstack/dnsmasq-dns-698758b865-5dmkj" Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.023469 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtmkr\" (UniqueName: \"kubernetes.io/projected/7771e669-ecdf-44a6-9e16-409eade01b8a-kube-api-access-gtmkr\") pod \"dnsmasq-dns-698758b865-5dmkj\" (UID: \"7771e669-ecdf-44a6-9e16-409eade01b8a\") " pod="openstack/dnsmasq-dns-698758b865-5dmkj" Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.109007 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-5dmkj" Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.374853 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5dmkj"] Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.646181 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn" podUID="23b2afc7-7876-4fa7-9610-96c2ac826ccd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.761842 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5dmkj" event={"ID":"7771e669-ecdf-44a6-9e16-409eade01b8a","Type":"ContainerStarted","Data":"28ea4b1f1d8811d338cd9f8aecaabdfb1f4aa28476ab68a0b646b9abd33f42f3"} Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.816745 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.822359 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.825017 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.827602 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.827602 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.828601 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-8kh84" Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.838692 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.921181 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzzsc\" (UniqueName: \"kubernetes.io/projected/1b76679b-41cc-4ddf-898b-5a05b5cfa052-kube-api-access-hzzsc\") pod \"swift-storage-0\" (UID: \"1b76679b-41cc-4ddf-898b-5a05b5cfa052\") " pod="openstack/swift-storage-0" Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.921246 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1b76679b-41cc-4ddf-898b-5a05b5cfa052-lock\") pod \"swift-storage-0\" (UID: \"1b76679b-41cc-4ddf-898b-5a05b5cfa052\") " pod="openstack/swift-storage-0" Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.921377 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1b76679b-41cc-4ddf-898b-5a05b5cfa052-etc-swift\") pod \"swift-storage-0\" (UID: \"1b76679b-41cc-4ddf-898b-5a05b5cfa052\") " pod="openstack/swift-storage-0" Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 17:07:03.921411 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"1b76679b-41cc-4ddf-898b-5a05b5cfa052\") " pod="openstack/swift-storage-0" Nov 24 17:07:03 crc kubenswrapper[4768]: I1124 
17:07:03.921453 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1b76679b-41cc-4ddf-898b-5a05b5cfa052-cache\") pod \"swift-storage-0\" (UID: \"1b76679b-41cc-4ddf-898b-5a05b5cfa052\") " pod="openstack/swift-storage-0" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.012288 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-ks8ts"] Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.013530 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.015540 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.016367 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.020199 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.022883 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzzsc\" (UniqueName: \"kubernetes.io/projected/1b76679b-41cc-4ddf-898b-5a05b5cfa052-kube-api-access-hzzsc\") pod \"swift-storage-0\" (UID: \"1b76679b-41cc-4ddf-898b-5a05b5cfa052\") " pod="openstack/swift-storage-0" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.022946 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1b76679b-41cc-4ddf-898b-5a05b5cfa052-lock\") pod \"swift-storage-0\" (UID: \"1b76679b-41cc-4ddf-898b-5a05b5cfa052\") " pod="openstack/swift-storage-0" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.023036 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1b76679b-41cc-4ddf-898b-5a05b5cfa052-etc-swift\") pod \"swift-storage-0\" (UID: \"1b76679b-41cc-4ddf-898b-5a05b5cfa052\") " pod="openstack/swift-storage-0" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.023077 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"1b76679b-41cc-4ddf-898b-5a05b5cfa052\") " pod="openstack/swift-storage-0" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.023125 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1b76679b-41cc-4ddf-898b-5a05b5cfa052-cache\") pod \"swift-storage-0\" (UID: \"1b76679b-41cc-4ddf-898b-5a05b5cfa052\") " pod="openstack/swift-storage-0" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.023623 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1b76679b-41cc-4ddf-898b-5a05b5cfa052-cache\") pod \"swift-storage-0\" (UID: \"1b76679b-41cc-4ddf-898b-5a05b5cfa052\") " pod="openstack/swift-storage-0" Nov 24 17:07:04 crc kubenswrapper[4768]: E1124 17:07:04.023746 4768 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 17:07:04 crc kubenswrapper[4768]: E1124 17:07:04.023767 4768 projected.go:194] Error preparing data for projected 
volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 17:07:04 crc kubenswrapper[4768]: E1124 17:07:04.023812 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1b76679b-41cc-4ddf-898b-5a05b5cfa052-etc-swift podName:1b76679b-41cc-4ddf-898b-5a05b5cfa052 nodeName:}" failed. No retries permitted until 2025-11-24 17:07:04.523794189 +0000 UTC m=+905.770762847 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1b76679b-41cc-4ddf-898b-5a05b5cfa052-etc-swift") pod "swift-storage-0" (UID: "1b76679b-41cc-4ddf-898b-5a05b5cfa052") : configmap "swift-ring-files" not found Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.023989 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1b76679b-41cc-4ddf-898b-5a05b5cfa052-lock\") pod \"swift-storage-0\" (UID: \"1b76679b-41cc-4ddf-898b-5a05b5cfa052\") " pod="openstack/swift-storage-0" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.024300 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"1b76679b-41cc-4ddf-898b-5a05b5cfa052\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.030039 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ks8ts"] Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.052968 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzzsc\" (UniqueName: \"kubernetes.io/projected/1b76679b-41cc-4ddf-898b-5a05b5cfa052-kube-api-access-hzzsc\") pod \"swift-storage-0\" (UID: \"1b76679b-41cc-4ddf-898b-5a05b5cfa052\") " pod="openstack/swift-storage-0" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.065402 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"1b76679b-41cc-4ddf-898b-5a05b5cfa052\") " pod="openstack/swift-storage-0" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.124150 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-ring-data-devices\") pod \"swift-ring-rebalance-ks8ts\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.124611 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-dispersionconf\") pod \"swift-ring-rebalance-ks8ts\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.124635 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-combined-ca-bundle\") pod \"swift-ring-rebalance-ks8ts\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: 
I1124 17:07:04.124835 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-swiftconf\") pod \"swift-ring-rebalance-ks8ts\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.124919 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-scripts\") pod \"swift-ring-rebalance-ks8ts\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.124989 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph86c\" (UniqueName: \"kubernetes.io/projected/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-kube-api-access-ph86c\") pod \"swift-ring-rebalance-ks8ts\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.125107 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-etc-swift\") pod \"swift-ring-rebalance-ks8ts\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.226434 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-ring-data-devices\") pod \"swift-ring-rebalance-ks8ts\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.226485 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-dispersionconf\") pod \"swift-ring-rebalance-ks8ts\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.226501 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-combined-ca-bundle\") pod \"swift-ring-rebalance-ks8ts\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.226543 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-swiftconf\") pod \"swift-ring-rebalance-ks8ts\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.226563 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-scripts\") pod \"swift-ring-rebalance-ks8ts\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.226591 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ph86c\" (UniqueName: \"kubernetes.io/projected/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-kube-api-access-ph86c\") pod \"swift-ring-rebalance-ks8ts\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.226640 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-etc-swift\") pod \"swift-ring-rebalance-ks8ts\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.227254 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-etc-swift\") pod \"swift-ring-rebalance-ks8ts\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.227521 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-ring-data-devices\") pod \"swift-ring-rebalance-ks8ts\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.227774 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-scripts\") pod \"swift-ring-rebalance-ks8ts\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.230940 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-combined-ca-bundle\") pod \"swift-ring-rebalance-ks8ts\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.233930 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-swiftconf\") pod \"swift-ring-rebalance-ks8ts\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.234068 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-dispersionconf\") pod \"swift-ring-rebalance-ks8ts\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.247989 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph86c\" (UniqueName: \"kubernetes.io/projected/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-kube-api-access-ph86c\") pod \"swift-ring-rebalance-ks8ts\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.332542 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.531513 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1b76679b-41cc-4ddf-898b-5a05b5cfa052-etc-swift\") pod \"swift-storage-0\" (UID: \"1b76679b-41cc-4ddf-898b-5a05b5cfa052\") " pod="openstack/swift-storage-0" Nov 24 17:07:04 crc kubenswrapper[4768]: E1124 17:07:04.531997 4768 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 17:07:04 crc kubenswrapper[4768]: E1124 17:07:04.532130 4768 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 17:07:04 crc kubenswrapper[4768]: E1124 17:07:04.532205 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1b76679b-41cc-4ddf-898b-5a05b5cfa052-etc-swift podName:1b76679b-41cc-4ddf-898b-5a05b5cfa052 nodeName:}" failed. No retries permitted until 2025-11-24 17:07:05.532177219 +0000 UTC m=+906.779145877 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1b76679b-41cc-4ddf-898b-5a05b5cfa052-etc-swift") pod "swift-storage-0" (UID: "1b76679b-41cc-4ddf-898b-5a05b5cfa052") : configmap "swift-ring-files" not found Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.782470 4768 generic.go:334] "Generic (PLEG): container finished" podID="8f9e0002-d010-466b-9a99-c3a4ae2bf020" containerID="93cf4dc7f399e68d6a09411dd327b3fee00a4c41065a0ac8c373a849a646dd88" exitCode=0 Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.782559 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d93f-account-create-9nz7l" event={"ID":"8f9e0002-d010-466b-9a99-c3a4ae2bf020","Type":"ContainerDied","Data":"93cf4dc7f399e68d6a09411dd327b3fee00a4c41065a0ac8c373a849a646dd88"} Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.785111 4768 generic.go:334] "Generic (PLEG): container finished" podID="d3089d03-ccf1-4aff-ac88-45fefc76ec67" containerID="971501cff335143c82645323507905d65f8733e81a258b8d9091fed616820cc4" exitCode=0 Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.785229 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a945-account-create-lxxbd" event={"ID":"d3089d03-ccf1-4aff-ac88-45fefc76ec67","Type":"ContainerDied","Data":"971501cff335143c82645323507905d65f8733e81a258b8d9091fed616820cc4"} Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.794115 4768 generic.go:334] "Generic (PLEG): container finished" podID="339680b8-1c63-400b-92a2-5b3dff0d90f3" containerID="6329f0262c24c4c4c2efc9adb7cc7abf77e9bada8184d4c76e2b12411c734bc6" exitCode=0 Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.794435 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8c2vc" event={"ID":"339680b8-1c63-400b-92a2-5b3dff0d90f3","Type":"ContainerDied","Data":"6329f0262c24c4c4c2efc9adb7cc7abf77e9bada8184d4c76e2b12411c734bc6"} Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.807061 4768 generic.go:334] "Generic (PLEG): container finished" podID="71c5b0d6-d045-486b-88fa-32652a7d875f" containerID="c4f1b99ce5c335a3a2764b05fd6fb617c1e1c6f0a14cf5c5f7f77b60c8f75a2b" exitCode=0 Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.807139 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-create-8hn9l" event={"ID":"71c5b0d6-d045-486b-88fa-32652a7d875f","Type":"ContainerDied","Data":"c4f1b99ce5c335a3a2764b05fd6fb617c1e1c6f0a14cf5c5f7f77b60c8f75a2b"} Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.809636 4768 generic.go:334] "Generic (PLEG): container finished" podID="7771e669-ecdf-44a6-9e16-409eade01b8a" containerID="fc5901e3b749a938c7ccf094a5707b58d7b8fb799061d02f5f92b96cf45110b5" exitCode=0 Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.809819 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5dmkj" event={"ID":"7771e669-ecdf-44a6-9e16-409eade01b8a","Type":"ContainerDied","Data":"fc5901e3b749a938c7ccf094a5707b58d7b8fb799061d02f5f92b96cf45110b5"} Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.824560 4768 generic.go:334] "Generic (PLEG): container finished" podID="23b2afc7-7876-4fa7-9610-96c2ac826ccd" containerID="f32c818c02706241858e72e171ecee31be63c980ba2362d2f28dcf72e910c9a3" exitCode=0 Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.824859 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn" event={"ID":"23b2afc7-7876-4fa7-9610-96c2ac826ccd","Type":"ContainerDied","Data":"f32c818c02706241858e72e171ecee31be63c980ba2362d2f28dcf72e910c9a3"} Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.855470 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ks8ts"] Nov 24 17:07:04 crc kubenswrapper[4768]: I1124 17:07:04.999407 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn" Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.144956 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-ovsdbserver-sb\") pod \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\" (UID: \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\") " Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.145043 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c88k6\" (UniqueName: \"kubernetes.io/projected/23b2afc7-7876-4fa7-9610-96c2ac826ccd-kube-api-access-c88k6\") pod \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\" (UID: \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\") " Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.145184 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-ovsdbserver-nb\") pod \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\" (UID: \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\") " Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.145391 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-dns-svc\") pod \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\" (UID: \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\") " Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.145429 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-config\") pod \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\" (UID: \"23b2afc7-7876-4fa7-9610-96c2ac826ccd\") " Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.148615 4768 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/23b2afc7-7876-4fa7-9610-96c2ac826ccd-kube-api-access-c88k6" (OuterVolumeSpecName: "kube-api-access-c88k6") pod "23b2afc7-7876-4fa7-9610-96c2ac826ccd" (UID: "23b2afc7-7876-4fa7-9610-96c2ac826ccd"). InnerVolumeSpecName "kube-api-access-c88k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.190307 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-config" (OuterVolumeSpecName: "config") pod "23b2afc7-7876-4fa7-9610-96c2ac826ccd" (UID: "23b2afc7-7876-4fa7-9610-96c2ac826ccd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.190767 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "23b2afc7-7876-4fa7-9610-96c2ac826ccd" (UID: "23b2afc7-7876-4fa7-9610-96c2ac826ccd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.191198 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "23b2afc7-7876-4fa7-9610-96c2ac826ccd" (UID: "23b2afc7-7876-4fa7-9610-96c2ac826ccd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.191930 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "23b2afc7-7876-4fa7-9610-96c2ac826ccd" (UID: "23b2afc7-7876-4fa7-9610-96c2ac826ccd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.247387 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.247426 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.247438 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.247451 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23b2afc7-7876-4fa7-9610-96c2ac826ccd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.247464 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c88k6\" (UniqueName: \"kubernetes.io/projected/23b2afc7-7876-4fa7-9610-96c2ac826ccd-kube-api-access-c88k6\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.554936 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1b76679b-41cc-4ddf-898b-5a05b5cfa052-etc-swift\") pod \"swift-storage-0\" (UID: \"1b76679b-41cc-4ddf-898b-5a05b5cfa052\") " pod="openstack/swift-storage-0" Nov 24 17:07:05 crc kubenswrapper[4768]: E1124 17:07:05.555095 4768 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 17:07:05 crc kubenswrapper[4768]: E1124 17:07:05.555111 4768 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 17:07:05 crc kubenswrapper[4768]: E1124 17:07:05.555165 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1b76679b-41cc-4ddf-898b-5a05b5cfa052-etc-swift podName:1b76679b-41cc-4ddf-898b-5a05b5cfa052 nodeName:}" failed. No retries permitted until 2025-11-24 17:07:07.555150625 +0000 UTC m=+908.802119273 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1b76679b-41cc-4ddf-898b-5a05b5cfa052-etc-swift") pod "swift-storage-0" (UID: "1b76679b-41cc-4ddf-898b-5a05b5cfa052") : configmap "swift-ring-files" not found Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.835266 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn" event={"ID":"23b2afc7-7876-4fa7-9610-96c2ac826ccd","Type":"ContainerDied","Data":"6f792a12e6af34b18e4728b851a6473d5a7c78cc44a128bb6e71efd77a315b8e"} Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.835310 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-5wpmn" Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.835366 4768 scope.go:117] "RemoveContainer" containerID="f32c818c02706241858e72e171ecee31be63c980ba2362d2f28dcf72e910c9a3" Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.837831 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ks8ts" event={"ID":"b46e54e9-1ffb-4094-a42a-0d7a86fff17c","Type":"ContainerStarted","Data":"9641a39613d934b3de85ff5dfd37159be4369e8cd0dee14d513c16b573f71487"} Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.840135 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5dmkj" event={"ID":"7771e669-ecdf-44a6-9e16-409eade01b8a","Type":"ContainerStarted","Data":"8daa70df6296ba9edffd866aa7b2ba8a6f4f31b69a4bd2c9c3de4658c291a1f4"} Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.860702 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-5dmkj" podStartSLOduration=3.860678641 podStartE2EDuration="3.860678641s" podCreationTimestamp="2025-11-24 17:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:07:05.85566917 +0000 UTC m=+907.102637838" watchObservedRunningTime="2025-11-24 17:07:05.860678641 +0000 UTC m=+907.107647299" Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.905287 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-xbvzf"] Nov 24 17:07:05 crc kubenswrapper[4768]: E1124 17:07:05.905695 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b2afc7-7876-4fa7-9610-96c2ac826ccd" containerName="dnsmasq-dns" Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.905714 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b2afc7-7876-4fa7-9610-96c2ac826ccd" containerName="dnsmasq-dns" Nov 24 17:07:05 crc kubenswrapper[4768]: E1124 17:07:05.905729 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b2afc7-7876-4fa7-9610-96c2ac826ccd" containerName="init" Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.905736 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b2afc7-7876-4fa7-9610-96c2ac826ccd" containerName="init" Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.905935 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="23b2afc7-7876-4fa7-9610-96c2ac826ccd" containerName="dnsmasq-dns" Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.906559 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-xbvzf" Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.921164 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-xbvzf"] Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.929991 4768 scope.go:117] "RemoveContainer" containerID="774c43d534804184d4d672b74869a8ff48776ee526ff6cca3f3ae82a8234235e" Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.932617 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5wpmn"] Nov 24 17:07:05 crc kubenswrapper[4768]: I1124 17:07:05.943427 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-5wpmn"] Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:05.994731 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-0131-account-create-h9pmh"] Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.000985 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0131-account-create-h9pmh" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.007229 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.019938 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0131-account-create-h9pmh"] Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.064733 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34f06565-e23b-4e08-88d7-280cb402977a-operator-scripts\") pod \"glance-db-create-xbvzf\" (UID: \"34f06565-e23b-4e08-88d7-280cb402977a\") " pod="openstack/glance-db-create-xbvzf" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.064789 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsxcp\" (UniqueName: \"kubernetes.io/projected/34f06565-e23b-4e08-88d7-280cb402977a-kube-api-access-hsxcp\") pod \"glance-db-create-xbvzf\" (UID: \"34f06565-e23b-4e08-88d7-280cb402977a\") " pod="openstack/glance-db-create-xbvzf" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.166906 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11e8ca38-334e-4dfd-a174-f02bfc8c69ec-operator-scripts\") pod \"glance-0131-account-create-h9pmh\" (UID: \"11e8ca38-334e-4dfd-a174-f02bfc8c69ec\") " pod="openstack/glance-0131-account-create-h9pmh" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.166960 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34f06565-e23b-4e08-88d7-280cb402977a-operator-scripts\") pod \"glance-db-create-xbvzf\" (UID: \"34f06565-e23b-4e08-88d7-280cb402977a\") " pod="openstack/glance-db-create-xbvzf" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.167005 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz9p4\" (UniqueName: \"kubernetes.io/projected/11e8ca38-334e-4dfd-a174-f02bfc8c69ec-kube-api-access-xz9p4\") pod \"glance-0131-account-create-h9pmh\" (UID: \"11e8ca38-334e-4dfd-a174-f02bfc8c69ec\") " pod="openstack/glance-0131-account-create-h9pmh" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.167047 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hsxcp\" (UniqueName: \"kubernetes.io/projected/34f06565-e23b-4e08-88d7-280cb402977a-kube-api-access-hsxcp\") pod \"glance-db-create-xbvzf\" (UID: \"34f06565-e23b-4e08-88d7-280cb402977a\") " pod="openstack/glance-db-create-xbvzf" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.167725 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34f06565-e23b-4e08-88d7-280cb402977a-operator-scripts\") pod \"glance-db-create-xbvzf\" (UID: \"34f06565-e23b-4e08-88d7-280cb402977a\") " pod="openstack/glance-db-create-xbvzf" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.189878 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsxcp\" (UniqueName: \"kubernetes.io/projected/34f06565-e23b-4e08-88d7-280cb402977a-kube-api-access-hsxcp\") pod \"glance-db-create-xbvzf\" (UID: \"34f06565-e23b-4e08-88d7-280cb402977a\") " pod="openstack/glance-db-create-xbvzf" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.231096 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-xbvzf" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.269875 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11e8ca38-334e-4dfd-a174-f02bfc8c69ec-operator-scripts\") pod \"glance-0131-account-create-h9pmh\" (UID: \"11e8ca38-334e-4dfd-a174-f02bfc8c69ec\") " pod="openstack/glance-0131-account-create-h9pmh" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.269929 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz9p4\" (UniqueName: \"kubernetes.io/projected/11e8ca38-334e-4dfd-a174-f02bfc8c69ec-kube-api-access-xz9p4\") pod \"glance-0131-account-create-h9pmh\" (UID: \"11e8ca38-334e-4dfd-a174-f02bfc8c69ec\") " pod="openstack/glance-0131-account-create-h9pmh" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.271630 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11e8ca38-334e-4dfd-a174-f02bfc8c69ec-operator-scripts\") pod \"glance-0131-account-create-h9pmh\" (UID: \"11e8ca38-334e-4dfd-a174-f02bfc8c69ec\") " pod="openstack/glance-0131-account-create-h9pmh" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.295424 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz9p4\" (UniqueName: \"kubernetes.io/projected/11e8ca38-334e-4dfd-a174-f02bfc8c69ec-kube-api-access-xz9p4\") pod \"glance-0131-account-create-h9pmh\" (UID: \"11e8ca38-334e-4dfd-a174-f02bfc8c69ec\") " pod="openstack/glance-0131-account-create-h9pmh" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.319686 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0131-account-create-h9pmh" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.355995 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a945-account-create-lxxbd" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.362240 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d93f-account-create-9nz7l" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.401983 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-8hn9l" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.433792 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8c2vc" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.507707 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvrvl\" (UniqueName: \"kubernetes.io/projected/8f9e0002-d010-466b-9a99-c3a4ae2bf020-kube-api-access-qvrvl\") pod \"8f9e0002-d010-466b-9a99-c3a4ae2bf020\" (UID: \"8f9e0002-d010-466b-9a99-c3a4ae2bf020\") " Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.507813 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhhzw\" (UniqueName: \"kubernetes.io/projected/d3089d03-ccf1-4aff-ac88-45fefc76ec67-kube-api-access-qhhzw\") pod \"d3089d03-ccf1-4aff-ac88-45fefc76ec67\" (UID: \"d3089d03-ccf1-4aff-ac88-45fefc76ec67\") " Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.507860 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f9e0002-d010-466b-9a99-c3a4ae2bf020-operator-scripts\") pod \"8f9e0002-d010-466b-9a99-c3a4ae2bf020\" (UID: \"8f9e0002-d010-466b-9a99-c3a4ae2bf020\") " Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.507954 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3089d03-ccf1-4aff-ac88-45fefc76ec67-operator-scripts\") pod \"d3089d03-ccf1-4aff-ac88-45fefc76ec67\" (UID: \"d3089d03-ccf1-4aff-ac88-45fefc76ec67\") " Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.507997 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71c5b0d6-d045-486b-88fa-32652a7d875f-operator-scripts\") pod \"71c5b0d6-d045-486b-88fa-32652a7d875f\" (UID: \"71c5b0d6-d045-486b-88fa-32652a7d875f\") " Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.508031 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j5w6\" (UniqueName: \"kubernetes.io/projected/71c5b0d6-d045-486b-88fa-32652a7d875f-kube-api-access-7j5w6\") pod \"71c5b0d6-d045-486b-88fa-32652a7d875f\" (UID: \"71c5b0d6-d045-486b-88fa-32652a7d875f\") " Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.519599 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71c5b0d6-d045-486b-88fa-32652a7d875f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "71c5b0d6-d045-486b-88fa-32652a7d875f" (UID: "71c5b0d6-d045-486b-88fa-32652a7d875f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.520206 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f9e0002-d010-466b-9a99-c3a4ae2bf020-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f9e0002-d010-466b-9a99-c3a4ae2bf020" (UID: "8f9e0002-d010-466b-9a99-c3a4ae2bf020"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.520654 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3089d03-ccf1-4aff-ac88-45fefc76ec67-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3089d03-ccf1-4aff-ac88-45fefc76ec67" (UID: "d3089d03-ccf1-4aff-ac88-45fefc76ec67"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.521596 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c5b0d6-d045-486b-88fa-32652a7d875f-kube-api-access-7j5w6" (OuterVolumeSpecName: "kube-api-access-7j5w6") pod "71c5b0d6-d045-486b-88fa-32652a7d875f" (UID: "71c5b0d6-d045-486b-88fa-32652a7d875f"). InnerVolumeSpecName "kube-api-access-7j5w6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.523126 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f9e0002-d010-466b-9a99-c3a4ae2bf020-kube-api-access-qvrvl" (OuterVolumeSpecName: "kube-api-access-qvrvl") pod "8f9e0002-d010-466b-9a99-c3a4ae2bf020" (UID: "8f9e0002-d010-466b-9a99-c3a4ae2bf020"). InnerVolumeSpecName "kube-api-access-qvrvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.524307 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3089d03-ccf1-4aff-ac88-45fefc76ec67-kube-api-access-qhhzw" (OuterVolumeSpecName: "kube-api-access-qhhzw") pod "d3089d03-ccf1-4aff-ac88-45fefc76ec67" (UID: "d3089d03-ccf1-4aff-ac88-45fefc76ec67"). InnerVolumeSpecName "kube-api-access-qhhzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.609841 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/339680b8-1c63-400b-92a2-5b3dff0d90f3-operator-scripts\") pod \"339680b8-1c63-400b-92a2-5b3dff0d90f3\" (UID: \"339680b8-1c63-400b-92a2-5b3dff0d90f3\") " Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.609954 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdxr4\" (UniqueName: \"kubernetes.io/projected/339680b8-1c63-400b-92a2-5b3dff0d90f3-kube-api-access-mdxr4\") pod \"339680b8-1c63-400b-92a2-5b3dff0d90f3\" (UID: \"339680b8-1c63-400b-92a2-5b3dff0d90f3\") " Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.610295 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/339680b8-1c63-400b-92a2-5b3dff0d90f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "339680b8-1c63-400b-92a2-5b3dff0d90f3" (UID: "339680b8-1c63-400b-92a2-5b3dff0d90f3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.610792 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f9e0002-d010-466b-9a99-c3a4ae2bf020-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.611165 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3089d03-ccf1-4aff-ac88-45fefc76ec67-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.611184 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71c5b0d6-d045-486b-88fa-32652a7d875f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.611198 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j5w6\" (UniqueName: \"kubernetes.io/projected/71c5b0d6-d045-486b-88fa-32652a7d875f-kube-api-access-7j5w6\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.611211 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvrvl\" (UniqueName: \"kubernetes.io/projected/8f9e0002-d010-466b-9a99-c3a4ae2bf020-kube-api-access-qvrvl\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.611224 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/339680b8-1c63-400b-92a2-5b3dff0d90f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.611236 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhhzw\" (UniqueName: \"kubernetes.io/projected/d3089d03-ccf1-4aff-ac88-45fefc76ec67-kube-api-access-qhhzw\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.613640 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/339680b8-1c63-400b-92a2-5b3dff0d90f3-kube-api-access-mdxr4" (OuterVolumeSpecName: "kube-api-access-mdxr4") pod "339680b8-1c63-400b-92a2-5b3dff0d90f3" (UID: "339680b8-1c63-400b-92a2-5b3dff0d90f3"). InnerVolumeSpecName "kube-api-access-mdxr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.713404 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdxr4\" (UniqueName: \"kubernetes.io/projected/339680b8-1c63-400b-92a2-5b3dff0d90f3-kube-api-access-mdxr4\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.838619 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-xbvzf"] Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.867683 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-8c2vc" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.867708 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8c2vc" event={"ID":"339680b8-1c63-400b-92a2-5b3dff0d90f3","Type":"ContainerDied","Data":"ec1086d134e2688f4b525681a2dfe031e7d64dd64630d7644a4736f85b5822c8"} Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.867764 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec1086d134e2688f4b525681a2dfe031e7d64dd64630d7644a4736f85b5822c8" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.869944 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-8hn9l" event={"ID":"71c5b0d6-d045-486b-88fa-32652a7d875f","Type":"ContainerDied","Data":"1e7df359b69374ce8b96ca4611fb674f1a950e07b6916d629b08e83c540356ac"} Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.869970 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e7df359b69374ce8b96ca4611fb674f1a950e07b6916d629b08e83c540356ac" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.870066 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-8hn9l" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.873716 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d93f-account-create-9nz7l" event={"ID":"8f9e0002-d010-466b-9a99-c3a4ae2bf020","Type":"ContainerDied","Data":"c87dd39f7fbe3f16413a689f80ca438aa9835e5ca40dafdbe113631ab3843323"} Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.873747 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c87dd39f7fbe3f16413a689f80ca438aa9835e5ca40dafdbe113631ab3843323" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.873795 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d93f-account-create-9nz7l" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.878588 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a945-account-create-lxxbd" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.879125 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a945-account-create-lxxbd" event={"ID":"d3089d03-ccf1-4aff-ac88-45fefc76ec67","Type":"ContainerDied","Data":"d129495f0c8bd7fa1aab84b08c53a0bfb2503050fc3e58993f907e897c8b3b9e"} Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.879170 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d129495f0c8bd7fa1aab84b08c53a0bfb2503050fc3e58993f907e897c8b3b9e" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.879198 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-5dmkj" Nov 24 17:07:06 crc kubenswrapper[4768]: I1124 17:07:06.920119 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0131-account-create-h9pmh"] Nov 24 17:07:07 crc kubenswrapper[4768]: I1124 17:07:07.592526 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23b2afc7-7876-4fa7-9610-96c2ac826ccd" path="/var/lib/kubelet/pods/23b2afc7-7876-4fa7-9610-96c2ac826ccd/volumes" Nov 24 17:07:07 crc kubenswrapper[4768]: I1124 17:07:07.639530 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1b76679b-41cc-4ddf-898b-5a05b5cfa052-etc-swift\") pod \"swift-storage-0\" (UID: \"1b76679b-41cc-4ddf-898b-5a05b5cfa052\") " pod="openstack/swift-storage-0" Nov 24 17:07:07 crc kubenswrapper[4768]: E1124 17:07:07.639763 4768 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 17:07:07 crc kubenswrapper[4768]: E1124 17:07:07.639789 4768 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 17:07:07 crc kubenswrapper[4768]: E1124 17:07:07.639855 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1b76679b-41cc-4ddf-898b-5a05b5cfa052-etc-swift podName:1b76679b-41cc-4ddf-898b-5a05b5cfa052 nodeName:}" failed. No retries permitted until 2025-11-24 17:07:11.639833191 +0000 UTC m=+912.886801849 (durationBeforeRetry 4s). 
Nov 24 17:07:07 crc kubenswrapper[4768]: I1124 17:07:07.887187 4768 generic.go:334] "Generic (PLEG): container finished" podID="11e8ca38-334e-4dfd-a174-f02bfc8c69ec" containerID="0df386c41bf1fc6cd8893daf074b61b3fef5b493d54f749b4ba325c81012da42" exitCode=0 Nov 24 17:07:07 crc kubenswrapper[4768]: I1124 17:07:07.887288 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0131-account-create-h9pmh" event={"ID":"11e8ca38-334e-4dfd-a174-f02bfc8c69ec","Type":"ContainerDied","Data":"0df386c41bf1fc6cd8893daf074b61b3fef5b493d54f749b4ba325c81012da42"} Nov 24 17:07:07 crc kubenswrapper[4768]: I1124 17:07:07.887337 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0131-account-create-h9pmh" event={"ID":"11e8ca38-334e-4dfd-a174-f02bfc8c69ec","Type":"ContainerStarted","Data":"4bcef96bdd197f68e015e8353f804ad89288b2e80aba2ea709dcb8011c996c5c"} Nov 24 17:07:07 crc kubenswrapper[4768]: I1124 17:07:07.888924 4768 generic.go:334] "Generic (PLEG): container finished" podID="34f06565-e23b-4e08-88d7-280cb402977a" containerID="292a685fa399858a0fdda12182f1d6d055e91cd8830ab63cb6b4290913e11862" exitCode=0 Nov 24 17:07:07 crc kubenswrapper[4768]: I1124 17:07:07.889000 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xbvzf" event={"ID":"34f06565-e23b-4e08-88d7-280cb402977a","Type":"ContainerDied","Data":"292a685fa399858a0fdda12182f1d6d055e91cd8830ab63cb6b4290913e11862"} Nov 24 17:07:07 crc kubenswrapper[4768]: I1124 17:07:07.889049 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xbvzf" event={"ID":"34f06565-e23b-4e08-88d7-280cb402977a","Type":"ContainerStarted","Data":"2b011582f5e59789781ed40fe989106f2233c696ee7907c8a2b546a38e72fa03"} Nov 24 17:07:08 crc kubenswrapper[4768]: I1124 17:07:08.667195 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 24 17:07:09 crc kubenswrapper[4768]: I1124 17:07:09.263033 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0131-account-create-h9pmh" Nov 24 17:07:09 crc kubenswrapper[4768]: I1124 17:07:09.267555 4768 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/glance-db-create-xbvzf" Nov 24 17:07:09 crc kubenswrapper[4768]: I1124 17:07:09.273947 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz9p4\" (UniqueName: \"kubernetes.io/projected/11e8ca38-334e-4dfd-a174-f02bfc8c69ec-kube-api-access-xz9p4\") pod \"11e8ca38-334e-4dfd-a174-f02bfc8c69ec\" (UID: \"11e8ca38-334e-4dfd-a174-f02bfc8c69ec\") " Nov 24 17:07:09 crc kubenswrapper[4768]: I1124 17:07:09.274164 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34f06565-e23b-4e08-88d7-280cb402977a-operator-scripts\") pod \"34f06565-e23b-4e08-88d7-280cb402977a\" (UID: \"34f06565-e23b-4e08-88d7-280cb402977a\") " Nov 24 17:07:09 crc kubenswrapper[4768]: I1124 17:07:09.274293 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsxcp\" (UniqueName: \"kubernetes.io/projected/34f06565-e23b-4e08-88d7-280cb402977a-kube-api-access-hsxcp\") pod \"34f06565-e23b-4e08-88d7-280cb402977a\" (UID: \"34f06565-e23b-4e08-88d7-280cb402977a\") " Nov 24 17:07:09 crc kubenswrapper[4768]: I1124 17:07:09.274396 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11e8ca38-334e-4dfd-a174-f02bfc8c69ec-operator-scripts\") pod \"11e8ca38-334e-4dfd-a174-f02bfc8c69ec\" (UID: \"11e8ca38-334e-4dfd-a174-f02bfc8c69ec\") " Nov 24 17:07:09 crc kubenswrapper[4768]: I1124 17:07:09.275188 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11e8ca38-334e-4dfd-a174-f02bfc8c69ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "11e8ca38-334e-4dfd-a174-f02bfc8c69ec" (UID: "11e8ca38-334e-4dfd-a174-f02bfc8c69ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:09 crc kubenswrapper[4768]: I1124 17:07:09.276417 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34f06565-e23b-4e08-88d7-280cb402977a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34f06565-e23b-4e08-88d7-280cb402977a" (UID: "34f06565-e23b-4e08-88d7-280cb402977a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:09 crc kubenswrapper[4768]: I1124 17:07:09.280700 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34f06565-e23b-4e08-88d7-280cb402977a-kube-api-access-hsxcp" (OuterVolumeSpecName: "kube-api-access-hsxcp") pod "34f06565-e23b-4e08-88d7-280cb402977a" (UID: "34f06565-e23b-4e08-88d7-280cb402977a"). InnerVolumeSpecName "kube-api-access-hsxcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:07:09 crc kubenswrapper[4768]: I1124 17:07:09.281507 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11e8ca38-334e-4dfd-a174-f02bfc8c69ec-kube-api-access-xz9p4" (OuterVolumeSpecName: "kube-api-access-xz9p4") pod "11e8ca38-334e-4dfd-a174-f02bfc8c69ec" (UID: "11e8ca38-334e-4dfd-a174-f02bfc8c69ec"). InnerVolumeSpecName "kube-api-access-xz9p4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:07:09 crc kubenswrapper[4768]: I1124 17:07:09.376514 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34f06565-e23b-4e08-88d7-280cb402977a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:09 crc kubenswrapper[4768]: I1124 17:07:09.376552 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsxcp\" (UniqueName: \"kubernetes.io/projected/34f06565-e23b-4e08-88d7-280cb402977a-kube-api-access-hsxcp\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:09 crc kubenswrapper[4768]: I1124 17:07:09.376564 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11e8ca38-334e-4dfd-a174-f02bfc8c69ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:09 crc kubenswrapper[4768]: I1124 17:07:09.376573 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz9p4\" (UniqueName: \"kubernetes.io/projected/11e8ca38-334e-4dfd-a174-f02bfc8c69ec-kube-api-access-xz9p4\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:09 crc kubenswrapper[4768]: I1124 17:07:09.905804 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xbvzf" event={"ID":"34f06565-e23b-4e08-88d7-280cb402977a","Type":"ContainerDied","Data":"2b011582f5e59789781ed40fe989106f2233c696ee7907c8a2b546a38e72fa03"} Nov 24 17:07:09 crc kubenswrapper[4768]: I1124 17:07:09.905854 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b011582f5e59789781ed40fe989106f2233c696ee7907c8a2b546a38e72fa03" Nov 24 17:07:09 crc kubenswrapper[4768]: I1124 17:07:09.905913 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-xbvzf" Nov 24 17:07:09 crc kubenswrapper[4768]: I1124 17:07:09.908023 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ks8ts" event={"ID":"b46e54e9-1ffb-4094-a42a-0d7a86fff17c","Type":"ContainerStarted","Data":"14673e997aee94b2241a81d4da553e1d456c72da2085c10adbf69eec15a43711"} Nov 24 17:07:09 crc kubenswrapper[4768]: I1124 17:07:09.910126 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0131-account-create-h9pmh" event={"ID":"11e8ca38-334e-4dfd-a174-f02bfc8c69ec","Type":"ContainerDied","Data":"4bcef96bdd197f68e015e8353f804ad89288b2e80aba2ea709dcb8011c996c5c"} Nov 24 17:07:09 crc kubenswrapper[4768]: I1124 17:07:09.910155 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bcef96bdd197f68e015e8353f804ad89288b2e80aba2ea709dcb8011c996c5c" Nov 24 17:07:09 crc kubenswrapper[4768]: I1124 17:07:09.910186 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0131-account-create-h9pmh" Nov 24 17:07:09 crc kubenswrapper[4768]: I1124 17:07:09.928135 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-ks8ts" podStartSLOduration=2.86380745 podStartE2EDuration="6.928116982s" podCreationTimestamp="2025-11-24 17:07:03 +0000 UTC" firstStartedPulling="2025-11-24 17:07:04.834488745 +0000 UTC m=+906.081457403" lastFinishedPulling="2025-11-24 17:07:08.898798277 +0000 UTC m=+910.145766935" observedRunningTime="2025-11-24 17:07:09.925172378 +0000 UTC m=+911.172141066" watchObservedRunningTime="2025-11-24 17:07:09.928116982 +0000 UTC m=+911.175085630" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.176565 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-gkks2"] Nov 24 17:07:11 crc kubenswrapper[4768]: E1124 17:07:11.177200 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e8ca38-334e-4dfd-a174-f02bfc8c69ec" containerName="mariadb-account-create" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.177214 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e8ca38-334e-4dfd-a174-f02bfc8c69ec" containerName="mariadb-account-create" Nov 24 17:07:11 crc kubenswrapper[4768]: E1124 17:07:11.177271 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34f06565-e23b-4e08-88d7-280cb402977a" containerName="mariadb-database-create" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.177277 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="34f06565-e23b-4e08-88d7-280cb402977a" containerName="mariadb-database-create" Nov 24 17:07:11 crc kubenswrapper[4768]: E1124 17:07:11.177285 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3089d03-ccf1-4aff-ac88-45fefc76ec67" containerName="mariadb-account-create" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.177292 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3089d03-ccf1-4aff-ac88-45fefc76ec67" containerName="mariadb-account-create" Nov 24 17:07:11 crc kubenswrapper[4768]: E1124 17:07:11.177311 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f9e0002-d010-466b-9a99-c3a4ae2bf020" containerName="mariadb-account-create" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.177316 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f9e0002-d010-466b-9a99-c3a4ae2bf020" containerName="mariadb-account-create" Nov 24 17:07:11 crc kubenswrapper[4768]: E1124 17:07:11.177328 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c5b0d6-d045-486b-88fa-32652a7d875f" containerName="mariadb-database-create" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.177333 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c5b0d6-d045-486b-88fa-32652a7d875f" containerName="mariadb-database-create" Nov 24 17:07:11 crc kubenswrapper[4768]: E1124 17:07:11.177365 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339680b8-1c63-400b-92a2-5b3dff0d90f3" containerName="mariadb-database-create" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.177372 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="339680b8-1c63-400b-92a2-5b3dff0d90f3" containerName="mariadb-database-create" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.177587 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3089d03-ccf1-4aff-ac88-45fefc76ec67" containerName="mariadb-account-create" Nov 24 17:07:11 crc 
Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.177599 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="11e8ca38-334e-4dfd-a174-f02bfc8c69ec" containerName="mariadb-account-create" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.177611 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="339680b8-1c63-400b-92a2-5b3dff0d90f3" containerName="mariadb-database-create" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.177617 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f9e0002-d010-466b-9a99-c3a4ae2bf020" containerName="mariadb-account-create" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.177631 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="34f06565-e23b-4e08-88d7-280cb402977a" containerName="mariadb-database-create" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.177642 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="71c5b0d6-d045-486b-88fa-32652a7d875f" containerName="mariadb-database-create" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.178231 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gkks2" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.181546 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-nwmdg" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.181546 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.183921 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gkks2"] Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.307669 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f8b577a7-e026-4976-8737-8d103f7b2c7b-db-sync-config-data\") pod \"glance-db-sync-gkks2\" (UID: \"f8b577a7-e026-4976-8737-8d103f7b2c7b\") " pod="openstack/glance-db-sync-gkks2" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.307716 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbcgq\" (UniqueName: \"kubernetes.io/projected/f8b577a7-e026-4976-8737-8d103f7b2c7b-kube-api-access-kbcgq\") pod \"glance-db-sync-gkks2\" (UID: \"f8b577a7-e026-4976-8737-8d103f7b2c7b\") " pod="openstack/glance-db-sync-gkks2" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.307975 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8b577a7-e026-4976-8737-8d103f7b2c7b-config-data\") pod \"glance-db-sync-gkks2\" (UID: \"f8b577a7-e026-4976-8737-8d103f7b2c7b\") " pod="openstack/glance-db-sync-gkks2" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.308100 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8b577a7-e026-4976-8737-8d103f7b2c7b-combined-ca-bundle\") pod \"glance-db-sync-gkks2\" (UID: \"f8b577a7-e026-4976-8737-8d103f7b2c7b\") " pod="openstack/glance-db-sync-gkks2" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.409896 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8b577a7-e026-4976-8737-8d103f7b2c7b-config-data\") pod
\"glance-db-sync-gkks2\" (UID: \"f8b577a7-e026-4976-8737-8d103f7b2c7b\") " pod="openstack/glance-db-sync-gkks2" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.410066 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8b577a7-e026-4976-8737-8d103f7b2c7b-combined-ca-bundle\") pod \"glance-db-sync-gkks2\" (UID: \"f8b577a7-e026-4976-8737-8d103f7b2c7b\") " pod="openstack/glance-db-sync-gkks2" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.410105 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f8b577a7-e026-4976-8737-8d103f7b2c7b-db-sync-config-data\") pod \"glance-db-sync-gkks2\" (UID: \"f8b577a7-e026-4976-8737-8d103f7b2c7b\") " pod="openstack/glance-db-sync-gkks2" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.410140 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbcgq\" (UniqueName: \"kubernetes.io/projected/f8b577a7-e026-4976-8737-8d103f7b2c7b-kube-api-access-kbcgq\") pod \"glance-db-sync-gkks2\" (UID: \"f8b577a7-e026-4976-8737-8d103f7b2c7b\") " pod="openstack/glance-db-sync-gkks2" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.416322 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f8b577a7-e026-4976-8737-8d103f7b2c7b-db-sync-config-data\") pod \"glance-db-sync-gkks2\" (UID: \"f8b577a7-e026-4976-8737-8d103f7b2c7b\") " pod="openstack/glance-db-sync-gkks2" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.416468 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8b577a7-e026-4976-8737-8d103f7b2c7b-config-data\") pod \"glance-db-sync-gkks2\" (UID: \"f8b577a7-e026-4976-8737-8d103f7b2c7b\") " pod="openstack/glance-db-sync-gkks2" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.424127 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8b577a7-e026-4976-8737-8d103f7b2c7b-combined-ca-bundle\") pod \"glance-db-sync-gkks2\" (UID: \"f8b577a7-e026-4976-8737-8d103f7b2c7b\") " pod="openstack/glance-db-sync-gkks2" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.446702 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbcgq\" (UniqueName: \"kubernetes.io/projected/f8b577a7-e026-4976-8737-8d103f7b2c7b-kube-api-access-kbcgq\") pod \"glance-db-sync-gkks2\" (UID: \"f8b577a7-e026-4976-8737-8d103f7b2c7b\") " pod="openstack/glance-db-sync-gkks2" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.495661 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gkks2" Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.715157 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1b76679b-41cc-4ddf-898b-5a05b5cfa052-etc-swift\") pod \"swift-storage-0\" (UID: \"1b76679b-41cc-4ddf-898b-5a05b5cfa052\") " pod="openstack/swift-storage-0" Nov 24 17:07:11 crc kubenswrapper[4768]: E1124 17:07:11.715558 4768 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 17:07:11 crc kubenswrapper[4768]: E1124 17:07:11.715589 4768 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 17:07:11 crc kubenswrapper[4768]: E1124 17:07:11.715650 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1b76679b-41cc-4ddf-898b-5a05b5cfa052-etc-swift podName:1b76679b-41cc-4ddf-898b-5a05b5cfa052 nodeName:}" failed. No retries permitted until 2025-11-24 17:07:19.715629428 +0000 UTC m=+920.962598096 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1b76679b-41cc-4ddf-898b-5a05b5cfa052-etc-swift") pod "swift-storage-0" (UID: "1b76679b-41cc-4ddf-898b-5a05b5cfa052") : configmap "swift-ring-files" not found Nov 24 17:07:11 crc kubenswrapper[4768]: I1124 17:07:11.997616 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gkks2"] Nov 24 17:07:12 crc kubenswrapper[4768]: W1124 17:07:12.004436 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8b577a7_e026_4976_8737_8d103f7b2c7b.slice/crio-e46521e1f9f778ce71f1596a968c2d2947427a8f96185c4cfa9d324330acfa89 WatchSource:0}: Error finding container e46521e1f9f778ce71f1596a968c2d2947427a8f96185c4cfa9d324330acfa89: Status 404 returned error can't find the container with id e46521e1f9f778ce71f1596a968c2d2947427a8f96185c4cfa9d324330acfa89 Nov 24 17:07:12 crc kubenswrapper[4768]: I1124 17:07:12.932566 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gkks2" event={"ID":"f8b577a7-e026-4976-8737-8d103f7b2c7b","Type":"ContainerStarted","Data":"e46521e1f9f778ce71f1596a968c2d2947427a8f96185c4cfa9d324330acfa89"} Nov 24 17:07:13 crc kubenswrapper[4768]: I1124 17:07:13.110592 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-5dmkj" Nov 24 17:07:13 crc kubenswrapper[4768]: I1124 17:07:13.167959 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kx42s"] Nov 24 17:07:13 crc kubenswrapper[4768]: I1124 17:07:13.168194 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-kx42s" podUID="80a9201c-6219-4268-bca9-b285b28d1c52" containerName="dnsmasq-dns" containerID="cri-o://c419171365563888fbf4b621e6a215bf4a4f8547f70dbdb5967b69547e0f8de9" gracePeriod=10 Nov 24 17:07:13 crc kubenswrapper[4768]: I1124 17:07:13.716925 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kx42s" Nov 24 17:07:13 crc kubenswrapper[4768]: I1124 17:07:13.851894 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80a9201c-6219-4268-bca9-b285b28d1c52-dns-svc\") pod \"80a9201c-6219-4268-bca9-b285b28d1c52\" (UID: \"80a9201c-6219-4268-bca9-b285b28d1c52\") " Nov 24 17:07:13 crc kubenswrapper[4768]: I1124 17:07:13.852013 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsmqv\" (UniqueName: \"kubernetes.io/projected/80a9201c-6219-4268-bca9-b285b28d1c52-kube-api-access-bsmqv\") pod \"80a9201c-6219-4268-bca9-b285b28d1c52\" (UID: \"80a9201c-6219-4268-bca9-b285b28d1c52\") " Nov 24 17:07:13 crc kubenswrapper[4768]: I1124 17:07:13.852082 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a9201c-6219-4268-bca9-b285b28d1c52-config\") pod \"80a9201c-6219-4268-bca9-b285b28d1c52\" (UID: \"80a9201c-6219-4268-bca9-b285b28d1c52\") " Nov 24 17:07:13 crc kubenswrapper[4768]: I1124 17:07:13.873330 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80a9201c-6219-4268-bca9-b285b28d1c52-kube-api-access-bsmqv" (OuterVolumeSpecName: "kube-api-access-bsmqv") pod "80a9201c-6219-4268-bca9-b285b28d1c52" (UID: "80a9201c-6219-4268-bca9-b285b28d1c52"). InnerVolumeSpecName "kube-api-access-bsmqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:07:13 crc kubenswrapper[4768]: I1124 17:07:13.892399 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80a9201c-6219-4268-bca9-b285b28d1c52-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "80a9201c-6219-4268-bca9-b285b28d1c52" (UID: "80a9201c-6219-4268-bca9-b285b28d1c52"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:13 crc kubenswrapper[4768]: I1124 17:07:13.895031 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80a9201c-6219-4268-bca9-b285b28d1c52-config" (OuterVolumeSpecName: "config") pod "80a9201c-6219-4268-bca9-b285b28d1c52" (UID: "80a9201c-6219-4268-bca9-b285b28d1c52"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:13 crc kubenswrapper[4768]: I1124 17:07:13.941741 4768 generic.go:334] "Generic (PLEG): container finished" podID="80a9201c-6219-4268-bca9-b285b28d1c52" containerID="c419171365563888fbf4b621e6a215bf4a4f8547f70dbdb5967b69547e0f8de9" exitCode=0 Nov 24 17:07:13 crc kubenswrapper[4768]: I1124 17:07:13.941785 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kx42s" event={"ID":"80a9201c-6219-4268-bca9-b285b28d1c52","Type":"ContainerDied","Data":"c419171365563888fbf4b621e6a215bf4a4f8547f70dbdb5967b69547e0f8de9"} Nov 24 17:07:13 crc kubenswrapper[4768]: I1124 17:07:13.941812 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-kx42s" event={"ID":"80a9201c-6219-4268-bca9-b285b28d1c52","Type":"ContainerDied","Data":"13a2e81a92a7684a6b01588939abb10b839dcca233c37c36760d31a8a5725536"} Nov 24 17:07:13 crc kubenswrapper[4768]: I1124 17:07:13.941828 4768 scope.go:117] "RemoveContainer" containerID="c419171365563888fbf4b621e6a215bf4a4f8547f70dbdb5967b69547e0f8de9" Nov 24 17:07:13 crc kubenswrapper[4768]: I1124 17:07:13.941936 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-kx42s" Nov 24 17:07:13 crc kubenswrapper[4768]: I1124 17:07:13.954973 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a9201c-6219-4268-bca9-b285b28d1c52-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:13 crc kubenswrapper[4768]: I1124 17:07:13.955071 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80a9201c-6219-4268-bca9-b285b28d1c52-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:13 crc kubenswrapper[4768]: I1124 17:07:13.955082 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsmqv\" (UniqueName: \"kubernetes.io/projected/80a9201c-6219-4268-bca9-b285b28d1c52-kube-api-access-bsmqv\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:13 crc kubenswrapper[4768]: I1124 17:07:13.969434 4768 scope.go:117] "RemoveContainer" containerID="cac6083a46080753618ca13da562ad219e2689230ed695cb6d113a1191a7efb8" Nov 24 17:07:13 crc kubenswrapper[4768]: I1124 17:07:13.990981 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kx42s"] Nov 24 17:07:13 crc kubenswrapper[4768]: I1124 17:07:13.998334 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-kx42s"] Nov 24 17:07:14 crc kubenswrapper[4768]: I1124 17:07:14.000611 4768 scope.go:117] "RemoveContainer" containerID="c419171365563888fbf4b621e6a215bf4a4f8547f70dbdb5967b69547e0f8de9" Nov 24 17:07:14 crc kubenswrapper[4768]: E1124 17:07:14.002558 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c419171365563888fbf4b621e6a215bf4a4f8547f70dbdb5967b69547e0f8de9\": container with ID starting with c419171365563888fbf4b621e6a215bf4a4f8547f70dbdb5967b69547e0f8de9 not found: ID does not exist" containerID="c419171365563888fbf4b621e6a215bf4a4f8547f70dbdb5967b69547e0f8de9" Nov 24 17:07:14 crc kubenswrapper[4768]: I1124 17:07:14.002618 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c419171365563888fbf4b621e6a215bf4a4f8547f70dbdb5967b69547e0f8de9"} err="failed to get container status 
\"c419171365563888fbf4b621e6a215bf4a4f8547f70dbdb5967b69547e0f8de9\": rpc error: code = NotFound desc = could not find container \"c419171365563888fbf4b621e6a215bf4a4f8547f70dbdb5967b69547e0f8de9\": container with ID starting with c419171365563888fbf4b621e6a215bf4a4f8547f70dbdb5967b69547e0f8de9 not found: ID does not exist" Nov 24 17:07:14 crc kubenswrapper[4768]: I1124 17:07:14.002652 4768 scope.go:117] "RemoveContainer" containerID="cac6083a46080753618ca13da562ad219e2689230ed695cb6d113a1191a7efb8" Nov 24 17:07:14 crc kubenswrapper[4768]: E1124 17:07:14.003651 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cac6083a46080753618ca13da562ad219e2689230ed695cb6d113a1191a7efb8\": container with ID starting with cac6083a46080753618ca13da562ad219e2689230ed695cb6d113a1191a7efb8 not found: ID does not exist" containerID="cac6083a46080753618ca13da562ad219e2689230ed695cb6d113a1191a7efb8" Nov 24 17:07:14 crc kubenswrapper[4768]: I1124 17:07:14.003681 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cac6083a46080753618ca13da562ad219e2689230ed695cb6d113a1191a7efb8"} err="failed to get container status \"cac6083a46080753618ca13da562ad219e2689230ed695cb6d113a1191a7efb8\": rpc error: code = NotFound desc = could not find container \"cac6083a46080753618ca13da562ad219e2689230ed695cb6d113a1191a7efb8\": container with ID starting with cac6083a46080753618ca13da562ad219e2689230ed695cb6d113a1191a7efb8 not found: ID does not exist" Nov 24 17:07:15 crc kubenswrapper[4768]: I1124 17:07:15.590672 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80a9201c-6219-4268-bca9-b285b28d1c52" path="/var/lib/kubelet/pods/80a9201c-6219-4268-bca9-b285b28d1c52/volumes" Nov 24 17:07:15 crc kubenswrapper[4768]: I1124 17:07:15.962677 4768 generic.go:334] "Generic (PLEG): container finished" podID="b46e54e9-1ffb-4094-a42a-0d7a86fff17c" containerID="14673e997aee94b2241a81d4da553e1d456c72da2085c10adbf69eec15a43711" exitCode=0 Nov 24 17:07:15 crc kubenswrapper[4768]: I1124 17:07:15.962723 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ks8ts" event={"ID":"b46e54e9-1ffb-4094-a42a-0d7a86fff17c","Type":"ContainerDied","Data":"14673e997aee94b2241a81d4da553e1d456c72da2085c10adbf69eec15a43711"} Nov 24 17:07:16 crc kubenswrapper[4768]: I1124 17:07:16.360701 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-8j94t" podUID="df42583a-33cf-4b89-9f69-7f3baeb6e7b5" containerName="ovn-controller" probeResult="failure" output=< Nov 24 17:07:16 crc kubenswrapper[4768]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 17:07:16 crc kubenswrapper[4768]: > Nov 24 17:07:19 crc kubenswrapper[4768]: I1124 17:07:19.760428 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1b76679b-41cc-4ddf-898b-5a05b5cfa052-etc-swift\") pod \"swift-storage-0\" (UID: \"1b76679b-41cc-4ddf-898b-5a05b5cfa052\") " pod="openstack/swift-storage-0" Nov 24 17:07:19 crc kubenswrapper[4768]: I1124 17:07:19.806815 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1b76679b-41cc-4ddf-898b-5a05b5cfa052-etc-swift\") pod \"swift-storage-0\" (UID: \"1b76679b-41cc-4ddf-898b-5a05b5cfa052\") " pod="openstack/swift-storage-0" Nov 24 17:07:19 crc 
Nov 24 17:07:19 crc kubenswrapper[4768]: I1124 17:07:19.993340 4768 generic.go:334] "Generic (PLEG): container finished" podID="e47b81a6-f793-404b-9713-121732eea148" containerID="f36530dde2b99b84e29bb49231ba0ff767276f912fb94ca55d7acc740607a119" exitCode=0 Nov 24 17:07:19 crc kubenswrapper[4768]: I1124 17:07:19.993416 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e47b81a6-f793-404b-9713-121732eea148","Type":"ContainerDied","Data":"f36530dde2b99b84e29bb49231ba0ff767276f912fb94ca55d7acc740607a119"} Nov 24 17:07:19 crc kubenswrapper[4768]: I1124 17:07:19.996959 4768 generic.go:334] "Generic (PLEG): container finished" podID="4fcab967-8d79-401f-927b-8770680c9c30" containerID="8b5ffa930480d1bcb82470ec566fa1e05afb25a0f5960ca653a351054ba0aa2c" exitCode=0 Nov 24 17:07:19 crc kubenswrapper[4768]: I1124 17:07:19.997015 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4fcab967-8d79-401f-927b-8770680c9c30","Type":"ContainerDied","Data":"8b5ffa930480d1bcb82470ec566fa1e05afb25a0f5960ca653a351054ba0aa2c"} Nov 24 17:07:20 crc kubenswrapper[4768]: I1124 17:07:20.095058 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.363632 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-8j94t" podUID="df42583a-33cf-4b89-9f69-7f3baeb6e7b5" containerName="ovn-controller" probeResult="failure" output=< Nov 24 17:07:21 crc kubenswrapper[4768]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 17:07:21 crc kubenswrapper[4768]: > Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.378774 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-zpbbq" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.395079 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-zpbbq" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.618290 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8j94t-config-xjvxv"] Nov 24 17:07:21 crc kubenswrapper[4768]: E1124 17:07:21.618923 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a9201c-6219-4268-bca9-b285b28d1c52" containerName="dnsmasq-dns" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.618947 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a9201c-6219-4268-bca9-b285b28d1c52" containerName="dnsmasq-dns" Nov 24 17:07:21 crc kubenswrapper[4768]: E1124 17:07:21.619018 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a9201c-6219-4268-bca9-b285b28d1c52" containerName="init" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.619029 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a9201c-6219-4268-bca9-b285b28d1c52" containerName="init" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.619240 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="80a9201c-6219-4268-bca9-b285b28d1c52" containerName="dnsmasq-dns" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.619957 4768 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ovn-controller-8j94t-config-xjvxv" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.622555 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.634522 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8j94t-config-xjvxv"] Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.794652 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-var-log-ovn\") pod \"ovn-controller-8j94t-config-xjvxv\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " pod="openstack/ovn-controller-8j94t-config-xjvxv" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.794749 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-var-run-ovn\") pod \"ovn-controller-8j94t-config-xjvxv\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " pod="openstack/ovn-controller-8j94t-config-xjvxv" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.794805 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kbv7\" (UniqueName: \"kubernetes.io/projected/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-kube-api-access-4kbv7\") pod \"ovn-controller-8j94t-config-xjvxv\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " pod="openstack/ovn-controller-8j94t-config-xjvxv" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.794831 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-scripts\") pod \"ovn-controller-8j94t-config-xjvxv\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " pod="openstack/ovn-controller-8j94t-config-xjvxv" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.794960 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-additional-scripts\") pod \"ovn-controller-8j94t-config-xjvxv\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " pod="openstack/ovn-controller-8j94t-config-xjvxv" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.795029 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-var-run\") pod \"ovn-controller-8j94t-config-xjvxv\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " pod="openstack/ovn-controller-8j94t-config-xjvxv" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.896732 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-var-log-ovn\") pod \"ovn-controller-8j94t-config-xjvxv\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " pod="openstack/ovn-controller-8j94t-config-xjvxv" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.896794 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-var-run-ovn\") 
pod \"ovn-controller-8j94t-config-xjvxv\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " pod="openstack/ovn-controller-8j94t-config-xjvxv" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.896835 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kbv7\" (UniqueName: \"kubernetes.io/projected/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-kube-api-access-4kbv7\") pod \"ovn-controller-8j94t-config-xjvxv\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " pod="openstack/ovn-controller-8j94t-config-xjvxv" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.896857 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-scripts\") pod \"ovn-controller-8j94t-config-xjvxv\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " pod="openstack/ovn-controller-8j94t-config-xjvxv" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.896896 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-additional-scripts\") pod \"ovn-controller-8j94t-config-xjvxv\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " pod="openstack/ovn-controller-8j94t-config-xjvxv" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.896918 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-var-run\") pod \"ovn-controller-8j94t-config-xjvxv\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " pod="openstack/ovn-controller-8j94t-config-xjvxv" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.897153 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-var-run\") pod \"ovn-controller-8j94t-config-xjvxv\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " pod="openstack/ovn-controller-8j94t-config-xjvxv" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.897155 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-var-run-ovn\") pod \"ovn-controller-8j94t-config-xjvxv\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " pod="openstack/ovn-controller-8j94t-config-xjvxv" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.897152 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-var-log-ovn\") pod \"ovn-controller-8j94t-config-xjvxv\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " pod="openstack/ovn-controller-8j94t-config-xjvxv" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.897863 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-additional-scripts\") pod \"ovn-controller-8j94t-config-xjvxv\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " pod="openstack/ovn-controller-8j94t-config-xjvxv" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.899828 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-scripts\") pod \"ovn-controller-8j94t-config-xjvxv\" 
(UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " pod="openstack/ovn-controller-8j94t-config-xjvxv" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.914906 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kbv7\" (UniqueName: \"kubernetes.io/projected/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-kube-api-access-4kbv7\") pod \"ovn-controller-8j94t-config-xjvxv\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " pod="openstack/ovn-controller-8j94t-config-xjvxv" Nov 24 17:07:21 crc kubenswrapper[4768]: I1124 17:07:21.952945 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8j94t-config-xjvxv" Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.466172 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.614296 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph86c\" (UniqueName: \"kubernetes.io/projected/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-kube-api-access-ph86c\") pod \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.621046 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-combined-ca-bundle\") pod \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.621099 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-scripts\") pod \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.621124 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-ring-data-devices\") pod \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.621154 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-dispersionconf\") pod \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.621236 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-swiftconf\") pod \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.621272 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-etc-swift\") pod \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\" (UID: \"b46e54e9-1ffb-4094-a42a-0d7a86fff17c\") " Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.620958 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-kube-api-access-ph86c" (OuterVolumeSpecName: "kube-api-access-ph86c") pod "b46e54e9-1ffb-4094-a42a-0d7a86fff17c" (UID: "b46e54e9-1ffb-4094-a42a-0d7a86fff17c"). InnerVolumeSpecName "kube-api-access-ph86c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.622681 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b46e54e9-1ffb-4094-a42a-0d7a86fff17c" (UID: "b46e54e9-1ffb-4094-a42a-0d7a86fff17c"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.623560 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b46e54e9-1ffb-4094-a42a-0d7a86fff17c" (UID: "b46e54e9-1ffb-4094-a42a-0d7a86fff17c"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.640398 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b46e54e9-1ffb-4094-a42a-0d7a86fff17c" (UID: "b46e54e9-1ffb-4094-a42a-0d7a86fff17c"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.641831 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-scripts" (OuterVolumeSpecName: "scripts") pod "b46e54e9-1ffb-4094-a42a-0d7a86fff17c" (UID: "b46e54e9-1ffb-4094-a42a-0d7a86fff17c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.643199 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b46e54e9-1ffb-4094-a42a-0d7a86fff17c" (UID: "b46e54e9-1ffb-4094-a42a-0d7a86fff17c"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.668371 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b46e54e9-1ffb-4094-a42a-0d7a86fff17c" (UID: "b46e54e9-1ffb-4094-a42a-0d7a86fff17c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.723657 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph86c\" (UniqueName: \"kubernetes.io/projected/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-kube-api-access-ph86c\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.723688 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.723698 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.723708 4768 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.723717 4768 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.723726 4768 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.723738 4768 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b46e54e9-1ffb-4094-a42a-0d7a86fff17c-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.874110 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8j94t-config-xjvxv"] Nov 24 17:07:22 crc kubenswrapper[4768]: W1124 17:07:22.874673 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod466c4a77_dad1_4d48_8ae8_1e7d87ba4c65.slice/crio-fb6bd130931a6eb3fd67dac08f38f309555cbc8cdcc6f00fb11ffbb753bba008 WatchSource:0}: Error finding container fb6bd130931a6eb3fd67dac08f38f309555cbc8cdcc6f00fb11ffbb753bba008: Status 404 returned error can't find the container with id fb6bd130931a6eb3fd67dac08f38f309555cbc8cdcc6f00fb11ffbb753bba008 Nov 24 17:07:22 crc kubenswrapper[4768]: I1124 17:07:22.879990 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 24 17:07:23 crc kubenswrapper[4768]: I1124 17:07:23.024494 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4fcab967-8d79-401f-927b-8770680c9c30","Type":"ContainerStarted","Data":"d40a1bb3801bc2867de67406370a0e49460e48d512113a441db8a6b211f8081b"} Nov 24 17:07:23 crc kubenswrapper[4768]: I1124 17:07:23.025999 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:07:23 crc kubenswrapper[4768]: I1124 17:07:23.034605 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ks8ts" 
event={"ID":"b46e54e9-1ffb-4094-a42a-0d7a86fff17c","Type":"ContainerDied","Data":"9641a39613d934b3de85ff5dfd37159be4369e8cd0dee14d513c16b573f71487"} Nov 24 17:07:23 crc kubenswrapper[4768]: I1124 17:07:23.034642 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9641a39613d934b3de85ff5dfd37159be4369e8cd0dee14d513c16b573f71487" Nov 24 17:07:23 crc kubenswrapper[4768]: I1124 17:07:23.034643 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ks8ts" Nov 24 17:07:23 crc kubenswrapper[4768]: I1124 17:07:23.039039 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b76679b-41cc-4ddf-898b-5a05b5cfa052","Type":"ContainerStarted","Data":"d901d081f3e54cc1da8906a625a2de045b33d940d783af3c54e47d24b78ecda1"} Nov 24 17:07:23 crc kubenswrapper[4768]: I1124 17:07:23.046991 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=49.884901718 podStartE2EDuration="57.046974263s" podCreationTimestamp="2025-11-24 17:06:26 +0000 UTC" firstStartedPulling="2025-11-24 17:06:37.308473658 +0000 UTC m=+878.555442316" lastFinishedPulling="2025-11-24 17:06:44.470546203 +0000 UTC m=+885.717514861" observedRunningTime="2025-11-24 17:07:23.046265183 +0000 UTC m=+924.293233841" watchObservedRunningTime="2025-11-24 17:07:23.046974263 +0000 UTC m=+924.293942921" Nov 24 17:07:23 crc kubenswrapper[4768]: I1124 17:07:23.047641 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e47b81a6-f793-404b-9713-121732eea148","Type":"ContainerStarted","Data":"af4a4419e1bcd42d0f6fa4a2e672b995629fd1dab566c01c5c3990432f0ce427"} Nov 24 17:07:23 crc kubenswrapper[4768]: I1124 17:07:23.048494 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 17:07:23 crc kubenswrapper[4768]: I1124 17:07:23.051267 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8j94t-config-xjvxv" event={"ID":"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65","Type":"ContainerStarted","Data":"fb6bd130931a6eb3fd67dac08f38f309555cbc8cdcc6f00fb11ffbb753bba008"} Nov 24 17:07:23 crc kubenswrapper[4768]: I1124 17:07:23.076757 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=49.442102632 podStartE2EDuration="57.076739354s" podCreationTimestamp="2025-11-24 17:06:26 +0000 UTC" firstStartedPulling="2025-11-24 17:06:37.285103367 +0000 UTC m=+878.532072025" lastFinishedPulling="2025-11-24 17:06:44.919740099 +0000 UTC m=+886.166708747" observedRunningTime="2025-11-24 17:07:23.073377049 +0000 UTC m=+924.320345707" watchObservedRunningTime="2025-11-24 17:07:23.076739354 +0000 UTC m=+924.323708012" Nov 24 17:07:24 crc kubenswrapper[4768]: I1124 17:07:24.059725 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gkks2" event={"ID":"f8b577a7-e026-4976-8737-8d103f7b2c7b","Type":"ContainerStarted","Data":"947367c61d4f4f77a219bccbc69d51031bb44c69cdb44db4244997b7f7ae8e23"} Nov 24 17:07:24 crc kubenswrapper[4768]: I1124 17:07:24.062852 4768 generic.go:334] "Generic (PLEG): container finished" podID="466c4a77-dad1-4d48-8ae8-1e7d87ba4c65" containerID="a34c8f2c3ae2c660ac2228952301650a017b2e17658e6e697d51618819f3c7e9" exitCode=0 Nov 24 17:07:24 crc kubenswrapper[4768]: I1124 17:07:24.063086 4768 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ovn-controller-8j94t-config-xjvxv" event={"ID":"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65","Type":"ContainerDied","Data":"a34c8f2c3ae2c660ac2228952301650a017b2e17658e6e697d51618819f3c7e9"} Nov 24 17:07:24 crc kubenswrapper[4768]: I1124 17:07:24.076152 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-gkks2" podStartSLOduration=2.702350385 podStartE2EDuration="13.076115772s" podCreationTimestamp="2025-11-24 17:07:11 +0000 UTC" firstStartedPulling="2025-11-24 17:07:12.005953855 +0000 UTC m=+913.252922523" lastFinishedPulling="2025-11-24 17:07:22.379719252 +0000 UTC m=+923.626687910" observedRunningTime="2025-11-24 17:07:24.071845911 +0000 UTC m=+925.318814589" watchObservedRunningTime="2025-11-24 17:07:24.076115772 +0000 UTC m=+925.323084430" Nov 24 17:07:25 crc kubenswrapper[4768]: I1124 17:07:25.127455 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b76679b-41cc-4ddf-898b-5a05b5cfa052","Type":"ContainerStarted","Data":"6e6e453f70b5e9bb9ccfd604294de0b7fc9ffab3a16e2f62e3652ccd2a9c110c"} Nov 24 17:07:25 crc kubenswrapper[4768]: I1124 17:07:25.127747 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b76679b-41cc-4ddf-898b-5a05b5cfa052","Type":"ContainerStarted","Data":"081cb60de15d806d423099147834ed395a3dee7cd804a22a5ae8d7b804f65130"} Nov 24 17:07:25 crc kubenswrapper[4768]: I1124 17:07:25.127758 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b76679b-41cc-4ddf-898b-5a05b5cfa052","Type":"ContainerStarted","Data":"f013cf6fa6d00ca86480a79cff3226fa18304482bbcd5c3e8abddec85ee477ac"} Nov 24 17:07:25 crc kubenswrapper[4768]: I1124 17:07:25.439423 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8j94t-config-xjvxv" Nov 24 17:07:25 crc kubenswrapper[4768]: I1124 17:07:25.501495 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-additional-scripts\") pod \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " Nov 24 17:07:25 crc kubenswrapper[4768]: I1124 17:07:25.501551 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-scripts\") pod \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " Nov 24 17:07:25 crc kubenswrapper[4768]: I1124 17:07:25.501589 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-var-run\") pod \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " Nov 24 17:07:25 crc kubenswrapper[4768]: I1124 17:07:25.501650 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-var-log-ovn\") pod \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " Nov 24 17:07:25 crc kubenswrapper[4768]: I1124 17:07:25.501672 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kbv7\" (UniqueName: \"kubernetes.io/projected/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-kube-api-access-4kbv7\") pod \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " Nov 24 17:07:25 crc kubenswrapper[4768]: I1124 17:07:25.501689 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-var-run-ovn\") pod \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\" (UID: \"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65\") " Nov 24 17:07:25 crc kubenswrapper[4768]: I1124 17:07:25.501697 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-var-run" (OuterVolumeSpecName: "var-run") pod "466c4a77-dad1-4d48-8ae8-1e7d87ba4c65" (UID: "466c4a77-dad1-4d48-8ae8-1e7d87ba4c65"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:07:25 crc kubenswrapper[4768]: I1124 17:07:25.501792 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "466c4a77-dad1-4d48-8ae8-1e7d87ba4c65" (UID: "466c4a77-dad1-4d48-8ae8-1e7d87ba4c65"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:07:25 crc kubenswrapper[4768]: I1124 17:07:25.502043 4768 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:25 crc kubenswrapper[4768]: I1124 17:07:25.502058 4768 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:25 crc kubenswrapper[4768]: I1124 17:07:25.502086 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "466c4a77-dad1-4d48-8ae8-1e7d87ba4c65" (UID: "466c4a77-dad1-4d48-8ae8-1e7d87ba4c65"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:07:25 crc kubenswrapper[4768]: I1124 17:07:25.502408 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "466c4a77-dad1-4d48-8ae8-1e7d87ba4c65" (UID: "466c4a77-dad1-4d48-8ae8-1e7d87ba4c65"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:25 crc kubenswrapper[4768]: I1124 17:07:25.502825 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-scripts" (OuterVolumeSpecName: "scripts") pod "466c4a77-dad1-4d48-8ae8-1e7d87ba4c65" (UID: "466c4a77-dad1-4d48-8ae8-1e7d87ba4c65"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:25 crc kubenswrapper[4768]: I1124 17:07:25.508176 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-kube-api-access-4kbv7" (OuterVolumeSpecName: "kube-api-access-4kbv7") pod "466c4a77-dad1-4d48-8ae8-1e7d87ba4c65" (UID: "466c4a77-dad1-4d48-8ae8-1e7d87ba4c65"). InnerVolumeSpecName "kube-api-access-4kbv7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:07:25 crc kubenswrapper[4768]: I1124 17:07:25.603337 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kbv7\" (UniqueName: \"kubernetes.io/projected/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-kube-api-access-4kbv7\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:25 crc kubenswrapper[4768]: I1124 17:07:25.603372 4768 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:25 crc kubenswrapper[4768]: I1124 17:07:25.603382 4768 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:25 crc kubenswrapper[4768]: I1124 17:07:25.603391 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:26 crc kubenswrapper[4768]: I1124 17:07:26.140332 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8j94t-config-xjvxv" event={"ID":"466c4a77-dad1-4d48-8ae8-1e7d87ba4c65","Type":"ContainerDied","Data":"fb6bd130931a6eb3fd67dac08f38f309555cbc8cdcc6f00fb11ffbb753bba008"} Nov 24 17:07:26 crc kubenswrapper[4768]: I1124 17:07:26.140416 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb6bd130931a6eb3fd67dac08f38f309555cbc8cdcc6f00fb11ffbb753bba008" Nov 24 17:07:26 crc kubenswrapper[4768]: I1124 17:07:26.140367 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8j94t-config-xjvxv" Nov 24 17:07:26 crc kubenswrapper[4768]: I1124 17:07:26.145132 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b76679b-41cc-4ddf-898b-5a05b5cfa052","Type":"ContainerStarted","Data":"4588f3b684c04c47a9137dba6e2c2ed1608d0746908452f505c4abdcd0dd3bd6"} Nov 24 17:07:26 crc kubenswrapper[4768]: I1124 17:07:26.372601 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-8j94t" Nov 24 17:07:26 crc kubenswrapper[4768]: I1124 17:07:26.531809 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-8j94t-config-xjvxv"] Nov 24 17:07:26 crc kubenswrapper[4768]: I1124 17:07:26.536920 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-8j94t-config-xjvxv"] Nov 24 17:07:27 crc kubenswrapper[4768]: I1124 17:07:27.174774 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b76679b-41cc-4ddf-898b-5a05b5cfa052","Type":"ContainerStarted","Data":"efd52eda930353623e373b828377808866686b905ac88f0ef0b1093dabd17826"} Nov 24 17:07:27 crc kubenswrapper[4768]: I1124 17:07:27.175080 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b76679b-41cc-4ddf-898b-5a05b5cfa052","Type":"ContainerStarted","Data":"4a065cb5dc2eb73f874d4467b8adad2e7ce41796c2a88a0202035548ef77ef81"} Nov 24 17:07:27 crc kubenswrapper[4768]: I1124 17:07:27.594545 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="466c4a77-dad1-4d48-8ae8-1e7d87ba4c65" path="/var/lib/kubelet/pods/466c4a77-dad1-4d48-8ae8-1e7d87ba4c65/volumes" Nov 24 17:07:28 crc kubenswrapper[4768]: I1124 17:07:28.190383 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b76679b-41cc-4ddf-898b-5a05b5cfa052","Type":"ContainerStarted","Data":"8f5ea1ee1844ebec4807a6beb2b4e5414d7f49b7a3928fb2238285f3122dbe60"} Nov 24 17:07:28 crc kubenswrapper[4768]: I1124 17:07:28.190425 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b76679b-41cc-4ddf-898b-5a05b5cfa052","Type":"ContainerStarted","Data":"f28fc4291372d8a5c839207f086108b4a47b0b92af62a1ffb94a9ee1e613ef33"} Nov 24 17:07:29 crc kubenswrapper[4768]: I1124 17:07:29.202468 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b76679b-41cc-4ddf-898b-5a05b5cfa052","Type":"ContainerStarted","Data":"66c911cc75a44d2a34c85a6466887fe865139610c90bfb583b706c7447cf7fc9"} Nov 24 17:07:29 crc kubenswrapper[4768]: I1124 17:07:29.202784 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b76679b-41cc-4ddf-898b-5a05b5cfa052","Type":"ContainerStarted","Data":"2dd6d3b346f516e4c2c9ec2971e41bbc090841a3ca5ca324815be75ede72cc73"} Nov 24 17:07:29 crc kubenswrapper[4768]: I1124 17:07:29.202796 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b76679b-41cc-4ddf-898b-5a05b5cfa052","Type":"ContainerStarted","Data":"69f370d20d5f1ca886092849359c27cf96a30aec6999a69915436d6172f027e4"} Nov 24 17:07:29 crc kubenswrapper[4768]: I1124 17:07:29.202805 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b76679b-41cc-4ddf-898b-5a05b5cfa052","Type":"ContainerStarted","Data":"51ed615624ffd0d27a1819c10c140716cb8be1f1e6bcd9451c22f5c06793e1bb"} 
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.211397 4768 generic.go:334] "Generic (PLEG): container finished" podID="f8b577a7-e026-4976-8737-8d103f7b2c7b" containerID="947367c61d4f4f77a219bccbc69d51031bb44c69cdb44db4244997b7f7ae8e23" exitCode=0
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.211489 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gkks2" event={"ID":"f8b577a7-e026-4976-8737-8d103f7b2c7b","Type":"ContainerDied","Data":"947367c61d4f4f77a219bccbc69d51031bb44c69cdb44db4244997b7f7ae8e23"}
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.223544 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b76679b-41cc-4ddf-898b-5a05b5cfa052","Type":"ContainerStarted","Data":"f02387c6a5c74cd97ce5fce70991062199f711a4210991bcaa6353f00141e71c"}
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.223619 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b76679b-41cc-4ddf-898b-5a05b5cfa052","Type":"ContainerStarted","Data":"e6bf124bcc5b50daeb2bef76fba193a2cf1ecd6357069a12bccb87d955d76334"}
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.223651 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1b76679b-41cc-4ddf-898b-5a05b5cfa052","Type":"ContainerStarted","Data":"25c75965c9b802bc263630e111cfb9f9ec44c1d079fbdd2a6f099c6f6c42900d"}
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.286242 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=22.74643219 podStartE2EDuration="28.286223269s" podCreationTimestamp="2025-11-24 17:07:02 +0000 UTC" firstStartedPulling="2025-11-24 17:07:22.883579994 +0000 UTC m=+924.130548652" lastFinishedPulling="2025-11-24 17:07:28.423371073 +0000 UTC m=+929.670339731" observedRunningTime="2025-11-24 17:07:30.278941463 +0000 UTC m=+931.525910131" watchObservedRunningTime="2025-11-24 17:07:30.286223269 +0000 UTC m=+931.533191927"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.584209 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-x8djn"]
Nov 24 17:07:30 crc kubenswrapper[4768]: E1124 17:07:30.584540 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="466c4a77-dad1-4d48-8ae8-1e7d87ba4c65" containerName="ovn-config"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.584556 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="466c4a77-dad1-4d48-8ae8-1e7d87ba4c65" containerName="ovn-config"
Nov 24 17:07:30 crc kubenswrapper[4768]: E1124 17:07:30.584580 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b46e54e9-1ffb-4094-a42a-0d7a86fff17c" containerName="swift-ring-rebalance"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.584587 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b46e54e9-1ffb-4094-a42a-0d7a86fff17c" containerName="swift-ring-rebalance"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.584755 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="466c4a77-dad1-4d48-8ae8-1e7d87ba4c65" containerName="ovn-config"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.584816 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b46e54e9-1ffb-4094-a42a-0d7a86fff17c" containerName="swift-ring-rebalance"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.585661 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-x8djn"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.588015 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.607310 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-x8djn"]
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.678034 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wbnb\" (UniqueName: \"kubernetes.io/projected/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-kube-api-access-2wbnb\") pod \"dnsmasq-dns-764c5664d7-x8djn\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " pod="openstack/dnsmasq-dns-764c5664d7-x8djn"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.678093 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-x8djn\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " pod="openstack/dnsmasq-dns-764c5664d7-x8djn"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.678113 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-dns-svc\") pod \"dnsmasq-dns-764c5664d7-x8djn\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " pod="openstack/dnsmasq-dns-764c5664d7-x8djn"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.678307 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-config\") pod \"dnsmasq-dns-764c5664d7-x8djn\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " pod="openstack/dnsmasq-dns-764c5664d7-x8djn"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.678411 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-x8djn\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " pod="openstack/dnsmasq-dns-764c5664d7-x8djn"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.678580 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-x8djn\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " pod="openstack/dnsmasq-dns-764c5664d7-x8djn"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.780958 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wbnb\" (UniqueName: \"kubernetes.io/projected/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-kube-api-access-2wbnb\") pod \"dnsmasq-dns-764c5664d7-x8djn\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " pod="openstack/dnsmasq-dns-764c5664d7-x8djn"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.781023 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-x8djn\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " pod="openstack/dnsmasq-dns-764c5664d7-x8djn"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.781049 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-dns-svc\") pod \"dnsmasq-dns-764c5664d7-x8djn\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " pod="openstack/dnsmasq-dns-764c5664d7-x8djn"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.781112 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-config\") pod \"dnsmasq-dns-764c5664d7-x8djn\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " pod="openstack/dnsmasq-dns-764c5664d7-x8djn"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.781161 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-x8djn\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " pod="openstack/dnsmasq-dns-764c5664d7-x8djn"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.781194 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-x8djn\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " pod="openstack/dnsmasq-dns-764c5664d7-x8djn"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.782022 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-dns-svc\") pod \"dnsmasq-dns-764c5664d7-x8djn\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " pod="openstack/dnsmasq-dns-764c5664d7-x8djn"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.782060 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-x8djn\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " pod="openstack/dnsmasq-dns-764c5664d7-x8djn"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.782288 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-x8djn\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " pod="openstack/dnsmasq-dns-764c5664d7-x8djn"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.782461 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-x8djn\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " pod="openstack/dnsmasq-dns-764c5664d7-x8djn"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.782475 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-config\") pod \"dnsmasq-dns-764c5664d7-x8djn\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " pod="openstack/dnsmasq-dns-764c5664d7-x8djn"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.803929 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wbnb\" (UniqueName: \"kubernetes.io/projected/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-kube-api-access-2wbnb\") pod \"dnsmasq-dns-764c5664d7-x8djn\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " pod="openstack/dnsmasq-dns-764c5664d7-x8djn"
Nov 24 17:07:30 crc kubenswrapper[4768]: I1124 17:07:30.903306 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-x8djn"
Nov 24 17:07:31 crc kubenswrapper[4768]: I1124 17:07:31.389925 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-x8djn"]
Nov 24 17:07:31 crc kubenswrapper[4768]: W1124 17:07:31.404565 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c7ac923_fcb5_48fa_bc2d_f73fd87a5d62.slice/crio-4c9d6eb0daf604cfe4d5467bedb7b9ae7031665eba1dda29221b8436cc4c066f WatchSource:0}: Error finding container 4c9d6eb0daf604cfe4d5467bedb7b9ae7031665eba1dda29221b8436cc4c066f: Status 404 returned error can't find the container with id 4c9d6eb0daf604cfe4d5467bedb7b9ae7031665eba1dda29221b8436cc4c066f
Nov 24 17:07:31 crc kubenswrapper[4768]: I1124 17:07:31.562817 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gkks2"
Nov 24 17:07:31 crc kubenswrapper[4768]: I1124 17:07:31.600531 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f8b577a7-e026-4976-8737-8d103f7b2c7b-db-sync-config-data\") pod \"f8b577a7-e026-4976-8737-8d103f7b2c7b\" (UID: \"f8b577a7-e026-4976-8737-8d103f7b2c7b\") "
Nov 24 17:07:31 crc kubenswrapper[4768]: I1124 17:07:31.600588 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8b577a7-e026-4976-8737-8d103f7b2c7b-combined-ca-bundle\") pod \"f8b577a7-e026-4976-8737-8d103f7b2c7b\" (UID: \"f8b577a7-e026-4976-8737-8d103f7b2c7b\") "
Nov 24 17:07:31 crc kubenswrapper[4768]: I1124 17:07:31.600662 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8b577a7-e026-4976-8737-8d103f7b2c7b-config-data\") pod \"f8b577a7-e026-4976-8737-8d103f7b2c7b\" (UID: \"f8b577a7-e026-4976-8737-8d103f7b2c7b\") "
Nov 24 17:07:31 crc kubenswrapper[4768]: I1124 17:07:31.600687 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbcgq\" (UniqueName: \"kubernetes.io/projected/f8b577a7-e026-4976-8737-8d103f7b2c7b-kube-api-access-kbcgq\") pod \"f8b577a7-e026-4976-8737-8d103f7b2c7b\" (UID: \"f8b577a7-e026-4976-8737-8d103f7b2c7b\") "
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:07:31 crc kubenswrapper[4768]: I1124 17:07:31.605125 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8b577a7-e026-4976-8737-8d103f7b2c7b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f8b577a7-e026-4976-8737-8d103f7b2c7b" (UID: "f8b577a7-e026-4976-8737-8d103f7b2c7b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:07:31 crc kubenswrapper[4768]: I1124 17:07:31.624161 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8b577a7-e026-4976-8737-8d103f7b2c7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8b577a7-e026-4976-8737-8d103f7b2c7b" (UID: "f8b577a7-e026-4976-8737-8d103f7b2c7b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:07:31 crc kubenswrapper[4768]: I1124 17:07:31.663215 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8b577a7-e026-4976-8737-8d103f7b2c7b-config-data" (OuterVolumeSpecName: "config-data") pod "f8b577a7-e026-4976-8737-8d103f7b2c7b" (UID: "f8b577a7-e026-4976-8737-8d103f7b2c7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:07:31 crc kubenswrapper[4768]: I1124 17:07:31.703205 4768 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f8b577a7-e026-4976-8737-8d103f7b2c7b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:31 crc kubenswrapper[4768]: I1124 17:07:31.703247 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8b577a7-e026-4976-8737-8d103f7b2c7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:31 crc kubenswrapper[4768]: I1124 17:07:31.703261 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8b577a7-e026-4976-8737-8d103f7b2c7b-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:31 crc kubenswrapper[4768]: I1124 17:07:31.703276 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbcgq\" (UniqueName: \"kubernetes.io/projected/f8b577a7-e026-4976-8737-8d103f7b2c7b-kube-api-access-kbcgq\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.242738 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gkks2" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.242761 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gkks2" event={"ID":"f8b577a7-e026-4976-8737-8d103f7b2c7b","Type":"ContainerDied","Data":"e46521e1f9f778ce71f1596a968c2d2947427a8f96185c4cfa9d324330acfa89"} Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.243268 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e46521e1f9f778ce71f1596a968c2d2947427a8f96185c4cfa9d324330acfa89" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.245078 4768 generic.go:334] "Generic (PLEG): container finished" podID="4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62" containerID="c57d7605eb440cab26aa0e462d79f88a56aa676255ad28e1a9e262a6c217c34b" exitCode=0 Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.245140 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-x8djn" event={"ID":"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62","Type":"ContainerDied","Data":"c57d7605eb440cab26aa0e462d79f88a56aa676255ad28e1a9e262a6c217c34b"} Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.245181 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-x8djn" event={"ID":"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62","Type":"ContainerStarted","Data":"4c9d6eb0daf604cfe4d5467bedb7b9ae7031665eba1dda29221b8436cc4c066f"} Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.679105 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-x8djn"] Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.687472 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-gnsqw"] Nov 24 17:07:32 crc kubenswrapper[4768]: E1124 17:07:32.687909 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b577a7-e026-4976-8737-8d103f7b2c7b" containerName="glance-db-sync" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.687932 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b577a7-e026-4976-8737-8d103f7b2c7b" containerName="glance-db-sync" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.688154 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b577a7-e026-4976-8737-8d103f7b2c7b" containerName="glance-db-sync" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.689178 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.698162 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-gnsqw"] Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.735067 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-gnsqw\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.735112 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-gnsqw\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.735138 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-config\") pod \"dnsmasq-dns-74f6bcbc87-gnsqw\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.735193 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-gnsqw\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.735225 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-gnsqw\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.735287 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9bpv\" (UniqueName: \"kubernetes.io/projected/76b0f1d1-484b-4959-963f-35a843f11fcc-kube-api-access-p9bpv\") pod \"dnsmasq-dns-74f6bcbc87-gnsqw\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.836254 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-gnsqw\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.836304 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-gnsqw\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.836333 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-config\") pod \"dnsmasq-dns-74f6bcbc87-gnsqw\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.836381 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-gnsqw\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.836415 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-gnsqw\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.836473 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9bpv\" (UniqueName: \"kubernetes.io/projected/76b0f1d1-484b-4959-963f-35a843f11fcc-kube-api-access-p9bpv\") pod \"dnsmasq-dns-74f6bcbc87-gnsqw\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.837189 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-gnsqw\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.837286 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-gnsqw\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.837502 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-config\") pod \"dnsmasq-dns-74f6bcbc87-gnsqw\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.837602 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-gnsqw\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.837657 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-gnsqw\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:07:32 crc kubenswrapper[4768]: I1124 17:07:32.854899 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9bpv\" (UniqueName: 
\"kubernetes.io/projected/76b0f1d1-484b-4959-963f-35a843f11fcc-kube-api-access-p9bpv\") pod \"dnsmasq-dns-74f6bcbc87-gnsqw\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:07:33 crc kubenswrapper[4768]: I1124 17:07:33.007980 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:07:33 crc kubenswrapper[4768]: I1124 17:07:33.254810 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-x8djn" event={"ID":"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62","Type":"ContainerStarted","Data":"533e8b2fca02a2f217808c5473e838fd2f95715e6703fdff5f3a2f074ac25559"} Nov 24 17:07:33 crc kubenswrapper[4768]: I1124 17:07:33.256927 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-x8djn" Nov 24 17:07:33 crc kubenswrapper[4768]: I1124 17:07:33.285452 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-gnsqw"] Nov 24 17:07:33 crc kubenswrapper[4768]: I1124 17:07:33.292701 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-x8djn" podStartSLOduration=3.292686541 podStartE2EDuration="3.292686541s" podCreationTimestamp="2025-11-24 17:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:07:33.274884537 +0000 UTC m=+934.521853195" watchObservedRunningTime="2025-11-24 17:07:33.292686541 +0000 UTC m=+934.539655189" Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.264975 4768 generic.go:334] "Generic (PLEG): container finished" podID="76b0f1d1-484b-4959-963f-35a843f11fcc" containerID="45a3afd5afab84eb01771e39cfc50869f354d26f81367aad5e74e09aa642e1d7" exitCode=0 Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.265077 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" event={"ID":"76b0f1d1-484b-4959-963f-35a843f11fcc","Type":"ContainerDied","Data":"45a3afd5afab84eb01771e39cfc50869f354d26f81367aad5e74e09aa642e1d7"} Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.265396 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" event={"ID":"76b0f1d1-484b-4959-963f-35a843f11fcc","Type":"ContainerStarted","Data":"df228b4cd09f256279bbe3ae0002a5a709d5c3bbe7920b809a9fd04f2e668e60"} Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.265423 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-x8djn" podUID="4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62" containerName="dnsmasq-dns" containerID="cri-o://533e8b2fca02a2f217808c5473e838fd2f95715e6703fdff5f3a2f074ac25559" gracePeriod=10 Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.745607 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-x8djn" Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.766547 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-ovsdbserver-nb\") pod \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.766590 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-ovsdbserver-sb\") pod \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.766632 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wbnb\" (UniqueName: \"kubernetes.io/projected/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-kube-api-access-2wbnb\") pod \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.766691 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-dns-svc\") pod \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.766723 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-config\") pod \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.766741 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-dns-swift-storage-0\") pod \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\" (UID: \"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62\") " Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.787318 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-kube-api-access-2wbnb" (OuterVolumeSpecName: "kube-api-access-2wbnb") pod "4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62" (UID: "4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62"). InnerVolumeSpecName "kube-api-access-2wbnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.812244 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-config" (OuterVolumeSpecName: "config") pod "4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62" (UID: "4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.819728 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62" (UID: "4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.824301 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62" (UID: "4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.825800 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62" (UID: "4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.831196 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62" (UID: "4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.869034 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.869090 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.869103 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.869115 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wbnb\" (UniqueName: \"kubernetes.io/projected/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-kube-api-access-2wbnb\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.869128 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:34 crc kubenswrapper[4768]: I1124 17:07:34.869160 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:35 crc kubenswrapper[4768]: I1124 17:07:35.274936 4768 generic.go:334] "Generic (PLEG): container finished" podID="4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62" containerID="533e8b2fca02a2f217808c5473e838fd2f95715e6703fdff5f3a2f074ac25559" exitCode=0 Nov 24 17:07:35 crc kubenswrapper[4768]: I1124 17:07:35.275004 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-x8djn" Nov 24 17:07:35 crc kubenswrapper[4768]: I1124 17:07:35.274982 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-x8djn" event={"ID":"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62","Type":"ContainerDied","Data":"533e8b2fca02a2f217808c5473e838fd2f95715e6703fdff5f3a2f074ac25559"} Nov 24 17:07:35 crc kubenswrapper[4768]: I1124 17:07:35.275766 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-x8djn" event={"ID":"4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62","Type":"ContainerDied","Data":"4c9d6eb0daf604cfe4d5467bedb7b9ae7031665eba1dda29221b8436cc4c066f"} Nov 24 17:07:35 crc kubenswrapper[4768]: I1124 17:07:35.275803 4768 scope.go:117] "RemoveContainer" containerID="533e8b2fca02a2f217808c5473e838fd2f95715e6703fdff5f3a2f074ac25559" Nov 24 17:07:35 crc kubenswrapper[4768]: I1124 17:07:35.277248 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" event={"ID":"76b0f1d1-484b-4959-963f-35a843f11fcc","Type":"ContainerStarted","Data":"e6bb1e1a1b7e61de94bb06703b58b69677a946f38e0448fd0c8a6340c64f9f1d"} Nov 24 17:07:35 crc kubenswrapper[4768]: I1124 17:07:35.278034 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:07:35 crc kubenswrapper[4768]: I1124 17:07:35.294389 4768 scope.go:117] "RemoveContainer" containerID="c57d7605eb440cab26aa0e462d79f88a56aa676255ad28e1a9e262a6c217c34b" Nov 24 17:07:35 crc kubenswrapper[4768]: I1124 17:07:35.334216 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" podStartSLOduration=3.334196666 podStartE2EDuration="3.334196666s" podCreationTimestamp="2025-11-24 17:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:07:35.304171767 +0000 UTC m=+936.551140425" watchObservedRunningTime="2025-11-24 17:07:35.334196666 +0000 UTC m=+936.581165324" Nov 24 17:07:35 crc kubenswrapper[4768]: I1124 17:07:35.336631 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-x8djn"] Nov 24 17:07:35 crc kubenswrapper[4768]: I1124 17:07:35.355385 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-x8djn"] Nov 24 17:07:35 crc kubenswrapper[4768]: I1124 17:07:35.359207 4768 scope.go:117] "RemoveContainer" containerID="533e8b2fca02a2f217808c5473e838fd2f95715e6703fdff5f3a2f074ac25559" Nov 24 17:07:35 crc kubenswrapper[4768]: E1124 17:07:35.359646 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"533e8b2fca02a2f217808c5473e838fd2f95715e6703fdff5f3a2f074ac25559\": container with ID starting with 533e8b2fca02a2f217808c5473e838fd2f95715e6703fdff5f3a2f074ac25559 not found: ID does not exist" containerID="533e8b2fca02a2f217808c5473e838fd2f95715e6703fdff5f3a2f074ac25559" Nov 24 17:07:35 crc kubenswrapper[4768]: I1124 17:07:35.359687 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"533e8b2fca02a2f217808c5473e838fd2f95715e6703fdff5f3a2f074ac25559"} err="failed to get container status \"533e8b2fca02a2f217808c5473e838fd2f95715e6703fdff5f3a2f074ac25559\": rpc error: code = NotFound desc = could not find container \"533e8b2fca02a2f217808c5473e838fd2f95715e6703fdff5f3a2f074ac25559\": 
container with ID starting with 533e8b2fca02a2f217808c5473e838fd2f95715e6703fdff5f3a2f074ac25559 not found: ID does not exist" Nov 24 17:07:35 crc kubenswrapper[4768]: I1124 17:07:35.359713 4768 scope.go:117] "RemoveContainer" containerID="c57d7605eb440cab26aa0e462d79f88a56aa676255ad28e1a9e262a6c217c34b" Nov 24 17:07:35 crc kubenswrapper[4768]: E1124 17:07:35.360367 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c57d7605eb440cab26aa0e462d79f88a56aa676255ad28e1a9e262a6c217c34b\": container with ID starting with c57d7605eb440cab26aa0e462d79f88a56aa676255ad28e1a9e262a6c217c34b not found: ID does not exist" containerID="c57d7605eb440cab26aa0e462d79f88a56aa676255ad28e1a9e262a6c217c34b" Nov 24 17:07:35 crc kubenswrapper[4768]: I1124 17:07:35.360393 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c57d7605eb440cab26aa0e462d79f88a56aa676255ad28e1a9e262a6c217c34b"} err="failed to get container status \"c57d7605eb440cab26aa0e462d79f88a56aa676255ad28e1a9e262a6c217c34b\": rpc error: code = NotFound desc = could not find container \"c57d7605eb440cab26aa0e462d79f88a56aa676255ad28e1a9e262a6c217c34b\": container with ID starting with c57d7605eb440cab26aa0e462d79f88a56aa676255ad28e1a9e262a6c217c34b not found: ID does not exist" Nov 24 17:07:35 crc kubenswrapper[4768]: I1124 17:07:35.592397 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62" path="/var/lib/kubelet/pods/4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62/volumes" Nov 24 17:07:37 crc kubenswrapper[4768]: I1124 17:07:37.618819 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 17:07:37 crc kubenswrapper[4768]: I1124 17:07:37.929066 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-d9f4t"] Nov 24 17:07:37 crc kubenswrapper[4768]: E1124 17:07:37.929622 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62" containerName="dnsmasq-dns" Nov 24 17:07:37 crc kubenswrapper[4768]: I1124 17:07:37.929641 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62" containerName="dnsmasq-dns" Nov 24 17:07:37 crc kubenswrapper[4768]: E1124 17:07:37.929680 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62" containerName="init" Nov 24 17:07:37 crc kubenswrapper[4768]: I1124 17:07:37.929688 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62" containerName="init" Nov 24 17:07:37 crc kubenswrapper[4768]: I1124 17:07:37.929859 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c7ac923-fcb5-48fa-bc2d-f73fd87a5d62" containerName="dnsmasq-dns" Nov 24 17:07:37 crc kubenswrapper[4768]: I1124 17:07:37.930402 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-d9f4t" Nov 24 17:07:37 crc kubenswrapper[4768]: I1124 17:07:37.939327 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-d9f4t"] Nov 24 17:07:37 crc kubenswrapper[4768]: I1124 17:07:37.950533 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-698c-account-create-kdm74"] Nov 24 17:07:37 crc kubenswrapper[4768]: I1124 17:07:37.951513 4768 util.go:30] "No sandbox for pod can be found. 
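The two NotFound errors above are a benign race rather than a failure: the kubelet retried RemoveContainer for IDs 533e8b2f... and c57d7605... after CRI-O had already deleted them during the dnsmasq-dns-764c5664d7-x8djn teardown, so the follow-up ContainerStatus lookup had nothing left to report, and the orphaned volumes dir was cleaned up normally right after. A sketch that separates this post-removal noise from runtime errors worth chasing (same placeholder file name, regexes tied to the phrasing in this journal):

import re

removed, benign, suspect = set(), [], []

rm_re = re.compile(r'scope\.go:\d+\] "RemoveContainer" containerID="([0-9a-f]+)"')
nf_re = re.compile(r'ContainerStatus from runtime service failed.*NotFound.*containerID="([0-9a-f]+)"')

with open("kubelet.log") as f:
    for line in f:
        if (m := rm_re.search(line)):
            removed.add(m.group(1))
        if (m := nf_re.search(line)):
            # NotFound after our own RemoveContainer is expected cleanup noise.
            (benign if m.group(1) in removed else suspect).append(m.group(1))

print(f"{len(benign)} benign post-removal NotFound errors, {len(suspect)} worth a look")

Applied to this window, both NotFound errors land in the benign bucket: each was preceded milliseconds earlier by a scope.go RemoveContainer for the same ID.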
Need to start a new one" pod="openstack/cinder-698c-account-create-kdm74" Nov 24 17:07:37 crc kubenswrapper[4768]: I1124 17:07:37.953180 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 24 17:07:37 crc kubenswrapper[4768]: I1124 17:07:37.977533 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:07:37 crc kubenswrapper[4768]: I1124 17:07:37.986489 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-698c-account-create-kdm74"] Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.023771 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-6jln2"] Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.025123 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6jln2" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.039966 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-6jln2"] Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.060435 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e53c068b-3ea6-4b03-a740-e296a2f3f7e0-operator-scripts\") pod \"cinder-698c-account-create-kdm74\" (UID: \"e53c068b-3ea6-4b03-a740-e296a2f3f7e0\") " pod="openstack/cinder-698c-account-create-kdm74" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.060479 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5acf409c-da92-42ec-982a-b2d1f34be104-operator-scripts\") pod \"cinder-db-create-d9f4t\" (UID: \"5acf409c-da92-42ec-982a-b2d1f34be104\") " pod="openstack/cinder-db-create-d9f4t" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.060536 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq4s4\" (UniqueName: \"kubernetes.io/projected/5acf409c-da92-42ec-982a-b2d1f34be104-kube-api-access-fq4s4\") pod \"cinder-db-create-d9f4t\" (UID: \"5acf409c-da92-42ec-982a-b2d1f34be104\") " pod="openstack/cinder-db-create-d9f4t" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.060610 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn9ld\" (UniqueName: \"kubernetes.io/projected/e53c068b-3ea6-4b03-a740-e296a2f3f7e0-kube-api-access-hn9ld\") pod \"cinder-698c-account-create-kdm74\" (UID: \"e53c068b-3ea6-4b03-a740-e296a2f3f7e0\") " pod="openstack/cinder-698c-account-create-kdm74" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.134214 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-7c6d-account-create-76dsd"] Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.135190 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-7c6d-account-create-76dsd" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.136960 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.148543 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7c6d-account-create-76dsd"] Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.162251 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq4s4\" (UniqueName: \"kubernetes.io/projected/5acf409c-da92-42ec-982a-b2d1f34be104-kube-api-access-fq4s4\") pod \"cinder-db-create-d9f4t\" (UID: \"5acf409c-da92-42ec-982a-b2d1f34be104\") " pod="openstack/cinder-db-create-d9f4t" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.162319 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b7d2e95-2ad1-47c3-a97c-65d3821742b3-operator-scripts\") pod \"barbican-db-create-6jln2\" (UID: \"7b7d2e95-2ad1-47c3-a97c-65d3821742b3\") " pod="openstack/barbican-db-create-6jln2" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.162366 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn9ld\" (UniqueName: \"kubernetes.io/projected/e53c068b-3ea6-4b03-a740-e296a2f3f7e0-kube-api-access-hn9ld\") pod \"cinder-698c-account-create-kdm74\" (UID: \"e53c068b-3ea6-4b03-a740-e296a2f3f7e0\") " pod="openstack/cinder-698c-account-create-kdm74" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.162397 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq5hp\" (UniqueName: \"kubernetes.io/projected/7b7d2e95-2ad1-47c3-a97c-65d3821742b3-kube-api-access-jq5hp\") pod \"barbican-db-create-6jln2\" (UID: \"7b7d2e95-2ad1-47c3-a97c-65d3821742b3\") " pod="openstack/barbican-db-create-6jln2" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.162448 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e53c068b-3ea6-4b03-a740-e296a2f3f7e0-operator-scripts\") pod \"cinder-698c-account-create-kdm74\" (UID: \"e53c068b-3ea6-4b03-a740-e296a2f3f7e0\") " pod="openstack/cinder-698c-account-create-kdm74" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.162465 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5acf409c-da92-42ec-982a-b2d1f34be104-operator-scripts\") pod \"cinder-db-create-d9f4t\" (UID: \"5acf409c-da92-42ec-982a-b2d1f34be104\") " pod="openstack/cinder-db-create-d9f4t" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.163122 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5acf409c-da92-42ec-982a-b2d1f34be104-operator-scripts\") pod \"cinder-db-create-d9f4t\" (UID: \"5acf409c-da92-42ec-982a-b2d1f34be104\") " pod="openstack/cinder-db-create-d9f4t" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.163608 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e53c068b-3ea6-4b03-a740-e296a2f3f7e0-operator-scripts\") pod \"cinder-698c-account-create-kdm74\" (UID: \"e53c068b-3ea6-4b03-a740-e296a2f3f7e0\") " 
pod="openstack/cinder-698c-account-create-kdm74" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.191315 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq4s4\" (UniqueName: \"kubernetes.io/projected/5acf409c-da92-42ec-982a-b2d1f34be104-kube-api-access-fq4s4\") pod \"cinder-db-create-d9f4t\" (UID: \"5acf409c-da92-42ec-982a-b2d1f34be104\") " pod="openstack/cinder-db-create-d9f4t" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.191317 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn9ld\" (UniqueName: \"kubernetes.io/projected/e53c068b-3ea6-4b03-a740-e296a2f3f7e0-kube-api-access-hn9ld\") pod \"cinder-698c-account-create-kdm74\" (UID: \"e53c068b-3ea6-4b03-a740-e296a2f3f7e0\") " pod="openstack/cinder-698c-account-create-kdm74" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.232497 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-2cnhg"] Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.234864 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2cnhg" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.247733 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.247897 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-d9f4t" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.247965 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hz2rd" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.248051 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.248593 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.249434 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-2cnhg"] Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.269461 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh7t7\" (UniqueName: \"kubernetes.io/projected/607cc9c4-da33-4557-b019-18efe88914f5-kube-api-access-gh7t7\") pod \"barbican-7c6d-account-create-76dsd\" (UID: \"607cc9c4-da33-4557-b019-18efe88914f5\") " pod="openstack/barbican-7c6d-account-create-76dsd" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.269587 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b7d2e95-2ad1-47c3-a97c-65d3821742b3-operator-scripts\") pod \"barbican-db-create-6jln2\" (UID: \"7b7d2e95-2ad1-47c3-a97c-65d3821742b3\") " pod="openstack/barbican-db-create-6jln2" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.269671 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq5hp\" (UniqueName: \"kubernetes.io/projected/7b7d2e95-2ad1-47c3-a97c-65d3821742b3-kube-api-access-jq5hp\") pod \"barbican-db-create-6jln2\" (UID: \"7b7d2e95-2ad1-47c3-a97c-65d3821742b3\") " pod="openstack/barbican-db-create-6jln2" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.269706 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/607cc9c4-da33-4557-b019-18efe88914f5-operator-scripts\") pod \"barbican-7c6d-account-create-76dsd\" (UID: \"607cc9c4-da33-4557-b019-18efe88914f5\") " pod="openstack/barbican-7c6d-account-create-76dsd" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.270545 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b7d2e95-2ad1-47c3-a97c-65d3821742b3-operator-scripts\") pod \"barbican-db-create-6jln2\" (UID: \"7b7d2e95-2ad1-47c3-a97c-65d3821742b3\") " pod="openstack/barbican-db-create-6jln2" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.279110 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-698c-account-create-kdm74" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.293363 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq5hp\" (UniqueName: \"kubernetes.io/projected/7b7d2e95-2ad1-47c3-a97c-65d3821742b3-kube-api-access-jq5hp\") pod \"barbican-db-create-6jln2\" (UID: \"7b7d2e95-2ad1-47c3-a97c-65d3821742b3\") " pod="openstack/barbican-db-create-6jln2" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.340102 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6jln2" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.346716 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-q2dwl"] Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.347871 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-q2dwl" Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.355892 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-ede5-account-create-4bcf7"] Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.356980 4768 util.go:30] "No sandbox for pod can be found. 
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.369233 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ede5-account-create-4bcf7"]
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.373264 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.374415 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-q2dwl"]
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.376115 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b6m2\" (UniqueName: \"kubernetes.io/projected/006943d1-b308-4fb2-8af1-b54310ff2deb-kube-api-access-2b6m2\") pod \"keystone-db-sync-2cnhg\" (UID: \"006943d1-b308-4fb2-8af1-b54310ff2deb\") " pod="openstack/keystone-db-sync-2cnhg"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.376149 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/006943d1-b308-4fb2-8af1-b54310ff2deb-combined-ca-bundle\") pod \"keystone-db-sync-2cnhg\" (UID: \"006943d1-b308-4fb2-8af1-b54310ff2deb\") " pod="openstack/keystone-db-sync-2cnhg"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.376187 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh7t7\" (UniqueName: \"kubernetes.io/projected/607cc9c4-da33-4557-b019-18efe88914f5-kube-api-access-gh7t7\") pod \"barbican-7c6d-account-create-76dsd\" (UID: \"607cc9c4-da33-4557-b019-18efe88914f5\") " pod="openstack/barbican-7c6d-account-create-76dsd"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.376271 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/607cc9c4-da33-4557-b019-18efe88914f5-operator-scripts\") pod \"barbican-7c6d-account-create-76dsd\" (UID: \"607cc9c4-da33-4557-b019-18efe88914f5\") " pod="openstack/barbican-7c6d-account-create-76dsd"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.376304 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/006943d1-b308-4fb2-8af1-b54310ff2deb-config-data\") pod \"keystone-db-sync-2cnhg\" (UID: \"006943d1-b308-4fb2-8af1-b54310ff2deb\") " pod="openstack/keystone-db-sync-2cnhg"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.379999 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/607cc9c4-da33-4557-b019-18efe88914f5-operator-scripts\") pod \"barbican-7c6d-account-create-76dsd\" (UID: \"607cc9c4-da33-4557-b019-18efe88914f5\") " pod="openstack/barbican-7c6d-account-create-76dsd"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.399842 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh7t7\" (UniqueName: \"kubernetes.io/projected/607cc9c4-da33-4557-b019-18efe88914f5-kube-api-access-gh7t7\") pod \"barbican-7c6d-account-create-76dsd\" (UID: \"607cc9c4-da33-4557-b019-18efe88914f5\") " pod="openstack/barbican-7c6d-account-create-76dsd"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.449915 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7c6d-account-create-76dsd"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.478208 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f82633b-8229-49c8-92f3-4bffcd57f7ba-operator-scripts\") pod \"neutron-ede5-account-create-4bcf7\" (UID: \"5f82633b-8229-49c8-92f3-4bffcd57f7ba\") " pod="openstack/neutron-ede5-account-create-4bcf7"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.478285 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ddtp\" (UniqueName: \"kubernetes.io/projected/5f82633b-8229-49c8-92f3-4bffcd57f7ba-kube-api-access-8ddtp\") pod \"neutron-ede5-account-create-4bcf7\" (UID: \"5f82633b-8229-49c8-92f3-4bffcd57f7ba\") " pod="openstack/neutron-ede5-account-create-4bcf7"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.478333 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4gp2\" (UniqueName: \"kubernetes.io/projected/8229b191-56fe-4e64-8a62-1213c86a792c-kube-api-access-z4gp2\") pod \"neutron-db-create-q2dwl\" (UID: \"8229b191-56fe-4e64-8a62-1213c86a792c\") " pod="openstack/neutron-db-create-q2dwl"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.478380 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8229b191-56fe-4e64-8a62-1213c86a792c-operator-scripts\") pod \"neutron-db-create-q2dwl\" (UID: \"8229b191-56fe-4e64-8a62-1213c86a792c\") " pod="openstack/neutron-db-create-q2dwl"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.478486 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/006943d1-b308-4fb2-8af1-b54310ff2deb-config-data\") pod \"keystone-db-sync-2cnhg\" (UID: \"006943d1-b308-4fb2-8af1-b54310ff2deb\") " pod="openstack/keystone-db-sync-2cnhg"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.478568 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b6m2\" (UniqueName: \"kubernetes.io/projected/006943d1-b308-4fb2-8af1-b54310ff2deb-kube-api-access-2b6m2\") pod \"keystone-db-sync-2cnhg\" (UID: \"006943d1-b308-4fb2-8af1-b54310ff2deb\") " pod="openstack/keystone-db-sync-2cnhg"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.478610 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/006943d1-b308-4fb2-8af1-b54310ff2deb-combined-ca-bundle\") pod \"keystone-db-sync-2cnhg\" (UID: \"006943d1-b308-4fb2-8af1-b54310ff2deb\") " pod="openstack/keystone-db-sync-2cnhg"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.486170 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/006943d1-b308-4fb2-8af1-b54310ff2deb-config-data\") pod \"keystone-db-sync-2cnhg\" (UID: \"006943d1-b308-4fb2-8af1-b54310ff2deb\") " pod="openstack/keystone-db-sync-2cnhg"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.489598 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/006943d1-b308-4fb2-8af1-b54310ff2deb-combined-ca-bundle\") pod \"keystone-db-sync-2cnhg\" (UID: \"006943d1-b308-4fb2-8af1-b54310ff2deb\") " pod="openstack/keystone-db-sync-2cnhg"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.501158 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b6m2\" (UniqueName: \"kubernetes.io/projected/006943d1-b308-4fb2-8af1-b54310ff2deb-kube-api-access-2b6m2\") pod \"keystone-db-sync-2cnhg\" (UID: \"006943d1-b308-4fb2-8af1-b54310ff2deb\") " pod="openstack/keystone-db-sync-2cnhg"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.583134 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f82633b-8229-49c8-92f3-4bffcd57f7ba-operator-scripts\") pod \"neutron-ede5-account-create-4bcf7\" (UID: \"5f82633b-8229-49c8-92f3-4bffcd57f7ba\") " pod="openstack/neutron-ede5-account-create-4bcf7"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.583184 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ddtp\" (UniqueName: \"kubernetes.io/projected/5f82633b-8229-49c8-92f3-4bffcd57f7ba-kube-api-access-8ddtp\") pod \"neutron-ede5-account-create-4bcf7\" (UID: \"5f82633b-8229-49c8-92f3-4bffcd57f7ba\") " pod="openstack/neutron-ede5-account-create-4bcf7"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.583208 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4gp2\" (UniqueName: \"kubernetes.io/projected/8229b191-56fe-4e64-8a62-1213c86a792c-kube-api-access-z4gp2\") pod \"neutron-db-create-q2dwl\" (UID: \"8229b191-56fe-4e64-8a62-1213c86a792c\") " pod="openstack/neutron-db-create-q2dwl"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.583225 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8229b191-56fe-4e64-8a62-1213c86a792c-operator-scripts\") pod \"neutron-db-create-q2dwl\" (UID: \"8229b191-56fe-4e64-8a62-1213c86a792c\") " pod="openstack/neutron-db-create-q2dwl"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.584125 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8229b191-56fe-4e64-8a62-1213c86a792c-operator-scripts\") pod \"neutron-db-create-q2dwl\" (UID: \"8229b191-56fe-4e64-8a62-1213c86a792c\") " pod="openstack/neutron-db-create-q2dwl"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.584622 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f82633b-8229-49c8-92f3-4bffcd57f7ba-operator-scripts\") pod \"neutron-ede5-account-create-4bcf7\" (UID: \"5f82633b-8229-49c8-92f3-4bffcd57f7ba\") " pod="openstack/neutron-ede5-account-create-4bcf7"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.599455 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ddtp\" (UniqueName: \"kubernetes.io/projected/5f82633b-8229-49c8-92f3-4bffcd57f7ba-kube-api-access-8ddtp\") pod \"neutron-ede5-account-create-4bcf7\" (UID: \"5f82633b-8229-49c8-92f3-4bffcd57f7ba\") " pod="openstack/neutron-ede5-account-create-4bcf7"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.606219 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4gp2\" (UniqueName: \"kubernetes.io/projected/8229b191-56fe-4e64-8a62-1213c86a792c-kube-api-access-z4gp2\") pod \"neutron-db-create-q2dwl\" (UID: \"8229b191-56fe-4e64-8a62-1213c86a792c\") " pod="openstack/neutron-db-create-q2dwl"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.674810 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2cnhg"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.702798 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-q2dwl"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.716860 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ede5-account-create-4bcf7"
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.805950 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-698c-account-create-kdm74"]
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.881685 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-6jln2"]
Nov 24 17:07:38 crc kubenswrapper[4768]: I1124 17:07:38.903175 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-d9f4t"]
Nov 24 17:07:38 crc kubenswrapper[4768]: W1124 17:07:38.908829 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b7d2e95_2ad1_47c3_a97c_65d3821742b3.slice/crio-cef8a9511d88e86a16192f69400aa42f421f73f260b0eb830047155942406b3e WatchSource:0}: Error finding container cef8a9511d88e86a16192f69400aa42f421f73f260b0eb830047155942406b3e: Status 404 returned error can't find the container with id cef8a9511d88e86a16192f69400aa42f421f73f260b0eb830047155942406b3e
Nov 24 17:07:38 crc kubenswrapper[4768]: W1124 17:07:38.921028 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5acf409c_da92_42ec_982a_b2d1f34be104.slice/crio-974679b1f2962cb84bdc57c55551b74a3f14960c1705e707a5e7ba7c38114d1e WatchSource:0}: Error finding container 974679b1f2962cb84bdc57c55551b74a3f14960c1705e707a5e7ba7c38114d1e: Status 404 returned error can't find the container with id 974679b1f2962cb84bdc57c55551b74a3f14960c1705e707a5e7ba7c38114d1e
Nov 24 17:07:39 crc kubenswrapper[4768]: I1124 17:07:39.047055 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7c6d-account-create-76dsd"]
Nov 24 17:07:39 crc kubenswrapper[4768]: W1124 17:07:39.057383 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod607cc9c4_da33_4557_b019_18efe88914f5.slice/crio-025abd44793e4fb335d0b8352654bd9ca82b72e0ee7893c7a6885c3358c6fe7f WatchSource:0}: Error finding container 025abd44793e4fb335d0b8352654bd9ca82b72e0ee7893c7a6885c3358c6fe7f: Status 404 returned error can't find the container with id 025abd44793e4fb335d0b8352654bd9ca82b72e0ee7893c7a6885c3358c6fe7f
Nov 24 17:07:39 crc kubenswrapper[4768]: I1124 17:07:39.200195 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-2cnhg"]
Nov 24 17:07:39 crc kubenswrapper[4768]: W1124 17:07:39.209677 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod006943d1_b308_4fb2_8af1_b54310ff2deb.slice/crio-2a23c48f7ea43f8a770cb7597ed72f4ade9e754be07763875917b0eaab266103 WatchSource:0}: Error finding container 2a23c48f7ea43f8a770cb7597ed72f4ade9e754be07763875917b0eaab266103: Status 404 returned error can't find the container with id 2a23c48f7ea43f8a770cb7597ed72f4ade9e754be07763875917b0eaab266103
Nov 24 17:07:39 crc kubenswrapper[4768]: I1124 17:07:39.300179 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ede5-account-create-4bcf7"]
Nov 24 17:07:39 crc kubenswrapper[4768]: I1124 17:07:39.309301 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-q2dwl"]
Nov 24 17:07:39 crc kubenswrapper[4768]: W1124 17:07:39.312646 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8229b191_56fe_4e64_8a62_1213c86a792c.slice/crio-ade3587bc211f710f2b39e2866c5515cdeb98573c36ed754e6c44b333f8ec7d4 WatchSource:0}: Error finding container ade3587bc211f710f2b39e2866c5515cdeb98573c36ed754e6c44b333f8ec7d4: Status 404 returned error can't find the container with id ade3587bc211f710f2b39e2866c5515cdeb98573c36ed754e6c44b333f8ec7d4
Nov 24 17:07:39 crc kubenswrapper[4768]: I1124 17:07:39.337395 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-q2dwl" event={"ID":"8229b191-56fe-4e64-8a62-1213c86a792c","Type":"ContainerStarted","Data":"ade3587bc211f710f2b39e2866c5515cdeb98573c36ed754e6c44b333f8ec7d4"}
Nov 24 17:07:39 crc kubenswrapper[4768]: I1124 17:07:39.341388 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6jln2" event={"ID":"7b7d2e95-2ad1-47c3-a97c-65d3821742b3","Type":"ContainerStarted","Data":"418266e40ed8e2d5bf7ffc2e7dbc0319643b345e815b68386c6847a7fbada2b2"}
Nov 24 17:07:39 crc kubenswrapper[4768]: I1124 17:07:39.341413 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6jln2" event={"ID":"7b7d2e95-2ad1-47c3-a97c-65d3821742b3","Type":"ContainerStarted","Data":"cef8a9511d88e86a16192f69400aa42f421f73f260b0eb830047155942406b3e"}
Nov 24 17:07:39 crc kubenswrapper[4768]: I1124 17:07:39.343960 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2cnhg" event={"ID":"006943d1-b308-4fb2-8af1-b54310ff2deb","Type":"ContainerStarted","Data":"2a23c48f7ea43f8a770cb7597ed72f4ade9e754be07763875917b0eaab266103"}
Nov 24 17:07:39 crc kubenswrapper[4768]: I1124 17:07:39.345789 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7c6d-account-create-76dsd" event={"ID":"607cc9c4-da33-4557-b019-18efe88914f5","Type":"ContainerStarted","Data":"d06a5f55c14e469642212a9d801511ce728cc2a49fbaf94f476c90d880656dea"}
Nov 24 17:07:39 crc kubenswrapper[4768]: I1124 17:07:39.345810 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7c6d-account-create-76dsd" event={"ID":"607cc9c4-da33-4557-b019-18efe88914f5","Type":"ContainerStarted","Data":"025abd44793e4fb335d0b8352654bd9ca82b72e0ee7893c7a6885c3358c6fe7f"}
Nov 24 17:07:39 crc kubenswrapper[4768]: I1124 17:07:39.349236 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-698c-account-create-kdm74" event={"ID":"e53c068b-3ea6-4b03-a740-e296a2f3f7e0","Type":"ContainerStarted","Data":"3d47d19127ffafb2575aff54155d061f52c9bd4fcd67da496d0aab96738c3e28"}
Nov 24 17:07:39 crc kubenswrapper[4768]: I1124 17:07:39.349261 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-698c-account-create-kdm74" event={"ID":"e53c068b-3ea6-4b03-a740-e296a2f3f7e0","Type":"ContainerStarted","Data":"69e9535d2a067e04ae7cb8832cec88fdcb5ebf044a9a4c69462f0b6fd039aa48"}
Nov 24 17:07:39 crc kubenswrapper[4768]: I1124 17:07:39.358082 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-d9f4t" event={"ID":"5acf409c-da92-42ec-982a-b2d1f34be104","Type":"ContainerStarted","Data":"4434652a1b4744b77d711b68b3dd8b8e4245cce746916904bcb638fcf3c65a47"}
Nov 24 17:07:39 crc kubenswrapper[4768]: I1124 17:07:39.358111 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-d9f4t" event={"ID":"5acf409c-da92-42ec-982a-b2d1f34be104","Type":"ContainerStarted","Data":"974679b1f2962cb84bdc57c55551b74a3f14960c1705e707a5e7ba7c38114d1e"}
Nov 24 17:07:39 crc kubenswrapper[4768]: I1124 17:07:39.362626 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ede5-account-create-4bcf7" event={"ID":"5f82633b-8229-49c8-92f3-4bffcd57f7ba","Type":"ContainerStarted","Data":"899c409cae5213a7a5f5c6c58a77964b11e22c33ff09af37dd72adcd7670dd8c"}
Nov 24 17:07:39 crc kubenswrapper[4768]: I1124 17:07:39.362905 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-6jln2" podStartSLOduration=1.3628947820000001 podStartE2EDuration="1.362894782s" podCreationTimestamp="2025-11-24 17:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:07:39.358746575 +0000 UTC m=+940.605715233" watchObservedRunningTime="2025-11-24 17:07:39.362894782 +0000 UTC m=+940.609863440"
Nov 24 17:07:39 crc kubenswrapper[4768]: I1124 17:07:39.380669 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-d9f4t" podStartSLOduration=2.380646824 podStartE2EDuration="2.380646824s" podCreationTimestamp="2025-11-24 17:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:07:39.379490331 +0000 UTC m=+940.626458989" watchObservedRunningTime="2025-11-24 17:07:39.380646824 +0000 UTC m=+940.627615482"
Nov 24 17:07:39 crc kubenswrapper[4768]: I1124 17:07:39.403619 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-7c6d-account-create-76dsd" podStartSLOduration=1.403577092 podStartE2EDuration="1.403577092s" podCreationTimestamp="2025-11-24 17:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:07:39.4006573 +0000 UTC m=+940.647625958" watchObservedRunningTime="2025-11-24 17:07:39.403577092 +0000 UTC m=+940.650545750"
Nov 24 17:07:39 crc kubenswrapper[4768]: I1124 17:07:39.430818 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-698c-account-create-kdm74" podStartSLOduration=2.430796772 podStartE2EDuration="2.430796772s" podCreationTimestamp="2025-11-24 17:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:07:39.414767698 +0000 UTC m=+940.661736356" watchObservedRunningTime="2025-11-24 17:07:39.430796772 +0000 UTC m=+940.677765430"
Nov 24 17:07:40 crc kubenswrapper[4768]: I1124 17:07:40.373242 4768 generic.go:334] "Generic (PLEG): container finished" podID="5acf409c-da92-42ec-982a-b2d1f34be104" containerID="4434652a1b4744b77d711b68b3dd8b8e4245cce746916904bcb638fcf3c65a47" exitCode=0
Nov 24 17:07:40 crc kubenswrapper[4768]: I1124 17:07:40.373341 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-d9f4t" event={"ID":"5acf409c-da92-42ec-982a-b2d1f34be104","Type":"ContainerDied","Data":"4434652a1b4744b77d711b68b3dd8b8e4245cce746916904bcb638fcf3c65a47"}
event={"ID":"5acf409c-da92-42ec-982a-b2d1f34be104","Type":"ContainerDied","Data":"4434652a1b4744b77d711b68b3dd8b8e4245cce746916904bcb638fcf3c65a47"} Nov 24 17:07:40 crc kubenswrapper[4768]: I1124 17:07:40.375204 4768 generic.go:334] "Generic (PLEG): container finished" podID="5f82633b-8229-49c8-92f3-4bffcd57f7ba" containerID="d825fb67a0cde5008343df7fb1b4b6d8fdc27806f37a6d0e4aca0a8c671190df" exitCode=0 Nov 24 17:07:40 crc kubenswrapper[4768]: I1124 17:07:40.375261 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ede5-account-create-4bcf7" event={"ID":"5f82633b-8229-49c8-92f3-4bffcd57f7ba","Type":"ContainerDied","Data":"d825fb67a0cde5008343df7fb1b4b6d8fdc27806f37a6d0e4aca0a8c671190df"} Nov 24 17:07:40 crc kubenswrapper[4768]: I1124 17:07:40.377024 4768 generic.go:334] "Generic (PLEG): container finished" podID="8229b191-56fe-4e64-8a62-1213c86a792c" containerID="56ae578635b0020b061a390eed7444930d4cc1631910d6128afd196e89ce4f2a" exitCode=0 Nov 24 17:07:40 crc kubenswrapper[4768]: I1124 17:07:40.377101 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-q2dwl" event={"ID":"8229b191-56fe-4e64-8a62-1213c86a792c","Type":"ContainerDied","Data":"56ae578635b0020b061a390eed7444930d4cc1631910d6128afd196e89ce4f2a"} Nov 24 17:07:40 crc kubenswrapper[4768]: I1124 17:07:40.378211 4768 generic.go:334] "Generic (PLEG): container finished" podID="7b7d2e95-2ad1-47c3-a97c-65d3821742b3" containerID="418266e40ed8e2d5bf7ffc2e7dbc0319643b345e815b68386c6847a7fbada2b2" exitCode=0 Nov 24 17:07:40 crc kubenswrapper[4768]: I1124 17:07:40.378257 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6jln2" event={"ID":"7b7d2e95-2ad1-47c3-a97c-65d3821742b3","Type":"ContainerDied","Data":"418266e40ed8e2d5bf7ffc2e7dbc0319643b345e815b68386c6847a7fbada2b2"} Nov 24 17:07:40 crc kubenswrapper[4768]: I1124 17:07:40.380102 4768 generic.go:334] "Generic (PLEG): container finished" podID="607cc9c4-da33-4557-b019-18efe88914f5" containerID="d06a5f55c14e469642212a9d801511ce728cc2a49fbaf94f476c90d880656dea" exitCode=0 Nov 24 17:07:40 crc kubenswrapper[4768]: I1124 17:07:40.380154 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7c6d-account-create-76dsd" event={"ID":"607cc9c4-da33-4557-b019-18efe88914f5","Type":"ContainerDied","Data":"d06a5f55c14e469642212a9d801511ce728cc2a49fbaf94f476c90d880656dea"} Nov 24 17:07:40 crc kubenswrapper[4768]: I1124 17:07:40.381567 4768 generic.go:334] "Generic (PLEG): container finished" podID="e53c068b-3ea6-4b03-a740-e296a2f3f7e0" containerID="3d47d19127ffafb2575aff54155d061f52c9bd4fcd67da496d0aab96738c3e28" exitCode=0 Nov 24 17:07:40 crc kubenswrapper[4768]: I1124 17:07:40.381594 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-698c-account-create-kdm74" event={"ID":"e53c068b-3ea6-4b03-a740-e296a2f3f7e0","Type":"ContainerDied","Data":"3d47d19127ffafb2575aff54155d061f52c9bd4fcd67da496d0aab96738c3e28"} Nov 24 17:07:41 crc kubenswrapper[4768]: I1124 17:07:41.793806 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-ede5-account-create-4bcf7" Nov 24 17:07:41 crc kubenswrapper[4768]: I1124 17:07:41.839908 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ddtp\" (UniqueName: \"kubernetes.io/projected/5f82633b-8229-49c8-92f3-4bffcd57f7ba-kube-api-access-8ddtp\") pod \"5f82633b-8229-49c8-92f3-4bffcd57f7ba\" (UID: \"5f82633b-8229-49c8-92f3-4bffcd57f7ba\") " Nov 24 17:07:41 crc kubenswrapper[4768]: I1124 17:07:41.840065 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f82633b-8229-49c8-92f3-4bffcd57f7ba-operator-scripts\") pod \"5f82633b-8229-49c8-92f3-4bffcd57f7ba\" (UID: \"5f82633b-8229-49c8-92f3-4bffcd57f7ba\") " Nov 24 17:07:41 crc kubenswrapper[4768]: I1124 17:07:41.840798 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f82633b-8229-49c8-92f3-4bffcd57f7ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5f82633b-8229-49c8-92f3-4bffcd57f7ba" (UID: "5f82633b-8229-49c8-92f3-4bffcd57f7ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:41 crc kubenswrapper[4768]: I1124 17:07:41.846261 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f82633b-8229-49c8-92f3-4bffcd57f7ba-kube-api-access-8ddtp" (OuterVolumeSpecName: "kube-api-access-8ddtp") pod "5f82633b-8229-49c8-92f3-4bffcd57f7ba" (UID: "5f82633b-8229-49c8-92f3-4bffcd57f7ba"). InnerVolumeSpecName "kube-api-access-8ddtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:07:41 crc kubenswrapper[4768]: I1124 17:07:41.941394 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ddtp\" (UniqueName: \"kubernetes.io/projected/5f82633b-8229-49c8-92f3-4bffcd57f7ba-kube-api-access-8ddtp\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:41 crc kubenswrapper[4768]: I1124 17:07:41.941415 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f82633b-8229-49c8-92f3-4bffcd57f7ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:41 crc kubenswrapper[4768]: I1124 17:07:41.958075 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6jln2" Nov 24 17:07:41 crc kubenswrapper[4768]: I1124 17:07:41.964298 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-d9f4t" Nov 24 17:07:41 crc kubenswrapper[4768]: I1124 17:07:41.970287 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-q2dwl" Nov 24 17:07:41 crc kubenswrapper[4768]: I1124 17:07:41.980062 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7c6d-account-create-76dsd" Nov 24 17:07:41 crc kubenswrapper[4768]: I1124 17:07:41.988539 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-698c-account-create-kdm74" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.042339 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh7t7\" (UniqueName: \"kubernetes.io/projected/607cc9c4-da33-4557-b019-18efe88914f5-kube-api-access-gh7t7\") pod \"607cc9c4-da33-4557-b019-18efe88914f5\" (UID: \"607cc9c4-da33-4557-b019-18efe88914f5\") " Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.042432 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq4s4\" (UniqueName: \"kubernetes.io/projected/5acf409c-da92-42ec-982a-b2d1f34be104-kube-api-access-fq4s4\") pod \"5acf409c-da92-42ec-982a-b2d1f34be104\" (UID: \"5acf409c-da92-42ec-982a-b2d1f34be104\") " Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.042518 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn9ld\" (UniqueName: \"kubernetes.io/projected/e53c068b-3ea6-4b03-a740-e296a2f3f7e0-kube-api-access-hn9ld\") pod \"e53c068b-3ea6-4b03-a740-e296a2f3f7e0\" (UID: \"e53c068b-3ea6-4b03-a740-e296a2f3f7e0\") " Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.042632 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5acf409c-da92-42ec-982a-b2d1f34be104-operator-scripts\") pod \"5acf409c-da92-42ec-982a-b2d1f34be104\" (UID: \"5acf409c-da92-42ec-982a-b2d1f34be104\") " Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.042659 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8229b191-56fe-4e64-8a62-1213c86a792c-operator-scripts\") pod \"8229b191-56fe-4e64-8a62-1213c86a792c\" (UID: \"8229b191-56fe-4e64-8a62-1213c86a792c\") " Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.042714 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b7d2e95-2ad1-47c3-a97c-65d3821742b3-operator-scripts\") pod \"7b7d2e95-2ad1-47c3-a97c-65d3821742b3\" (UID: \"7b7d2e95-2ad1-47c3-a97c-65d3821742b3\") " Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.042754 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e53c068b-3ea6-4b03-a740-e296a2f3f7e0-operator-scripts\") pod \"e53c068b-3ea6-4b03-a740-e296a2f3f7e0\" (UID: \"e53c068b-3ea6-4b03-a740-e296a2f3f7e0\") " Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.042792 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4gp2\" (UniqueName: \"kubernetes.io/projected/8229b191-56fe-4e64-8a62-1213c86a792c-kube-api-access-z4gp2\") pod \"8229b191-56fe-4e64-8a62-1213c86a792c\" (UID: \"8229b191-56fe-4e64-8a62-1213c86a792c\") " Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.042818 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq5hp\" (UniqueName: \"kubernetes.io/projected/7b7d2e95-2ad1-47c3-a97c-65d3821742b3-kube-api-access-jq5hp\") pod \"7b7d2e95-2ad1-47c3-a97c-65d3821742b3\" (UID: \"7b7d2e95-2ad1-47c3-a97c-65d3821742b3\") " Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.042863 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/607cc9c4-da33-4557-b019-18efe88914f5-operator-scripts\") pod \"607cc9c4-da33-4557-b019-18efe88914f5\" (UID: \"607cc9c4-da33-4557-b019-18efe88914f5\") " Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.043183 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5acf409c-da92-42ec-982a-b2d1f34be104-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5acf409c-da92-42ec-982a-b2d1f34be104" (UID: "5acf409c-da92-42ec-982a-b2d1f34be104"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.043260 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5acf409c-da92-42ec-982a-b2d1f34be104-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.043540 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b7d2e95-2ad1-47c3-a97c-65d3821742b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7b7d2e95-2ad1-47c3-a97c-65d3821742b3" (UID: "7b7d2e95-2ad1-47c3-a97c-65d3821742b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.043521 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8229b191-56fe-4e64-8a62-1213c86a792c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8229b191-56fe-4e64-8a62-1213c86a792c" (UID: "8229b191-56fe-4e64-8a62-1213c86a792c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.043668 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e53c068b-3ea6-4b03-a740-e296a2f3f7e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e53c068b-3ea6-4b03-a740-e296a2f3f7e0" (UID: "e53c068b-3ea6-4b03-a740-e296a2f3f7e0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.043678 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/607cc9c4-da33-4557-b019-18efe88914f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "607cc9c4-da33-4557-b019-18efe88914f5" (UID: "607cc9c4-da33-4557-b019-18efe88914f5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.049634 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e53c068b-3ea6-4b03-a740-e296a2f3f7e0-kube-api-access-hn9ld" (OuterVolumeSpecName: "kube-api-access-hn9ld") pod "e53c068b-3ea6-4b03-a740-e296a2f3f7e0" (UID: "e53c068b-3ea6-4b03-a740-e296a2f3f7e0"). InnerVolumeSpecName "kube-api-access-hn9ld". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.049659 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5acf409c-da92-42ec-982a-b2d1f34be104-kube-api-access-fq4s4" (OuterVolumeSpecName: "kube-api-access-fq4s4") pod "5acf409c-da92-42ec-982a-b2d1f34be104" (UID: "5acf409c-da92-42ec-982a-b2d1f34be104"). 
InnerVolumeSpecName "kube-api-access-fq4s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.049696 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/607cc9c4-da33-4557-b019-18efe88914f5-kube-api-access-gh7t7" (OuterVolumeSpecName: "kube-api-access-gh7t7") pod "607cc9c4-da33-4557-b019-18efe88914f5" (UID: "607cc9c4-da33-4557-b019-18efe88914f5"). InnerVolumeSpecName "kube-api-access-gh7t7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.049716 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b7d2e95-2ad1-47c3-a97c-65d3821742b3-kube-api-access-jq5hp" (OuterVolumeSpecName: "kube-api-access-jq5hp") pod "7b7d2e95-2ad1-47c3-a97c-65d3821742b3" (UID: "7b7d2e95-2ad1-47c3-a97c-65d3821742b3"). InnerVolumeSpecName "kube-api-access-jq5hp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.050497 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8229b191-56fe-4e64-8a62-1213c86a792c-kube-api-access-z4gp2" (OuterVolumeSpecName: "kube-api-access-z4gp2") pod "8229b191-56fe-4e64-8a62-1213c86a792c" (UID: "8229b191-56fe-4e64-8a62-1213c86a792c"). InnerVolumeSpecName "kube-api-access-z4gp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.145022 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn9ld\" (UniqueName: \"kubernetes.io/projected/e53c068b-3ea6-4b03-a740-e296a2f3f7e0-kube-api-access-hn9ld\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.145062 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8229b191-56fe-4e64-8a62-1213c86a792c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.145072 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b7d2e95-2ad1-47c3-a97c-65d3821742b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.145081 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e53c068b-3ea6-4b03-a740-e296a2f3f7e0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.145091 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4gp2\" (UniqueName: \"kubernetes.io/projected/8229b191-56fe-4e64-8a62-1213c86a792c-kube-api-access-z4gp2\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.145102 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq5hp\" (UniqueName: \"kubernetes.io/projected/7b7d2e95-2ad1-47c3-a97c-65d3821742b3-kube-api-access-jq5hp\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.145112 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/607cc9c4-da33-4557-b019-18efe88914f5-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.145121 4768 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-gh7t7\" (UniqueName: \"kubernetes.io/projected/607cc9c4-da33-4557-b019-18efe88914f5-kube-api-access-gh7t7\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.145132 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq4s4\" (UniqueName: \"kubernetes.io/projected/5acf409c-da92-42ec-982a-b2d1f34be104-kube-api-access-fq4s4\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.399321 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6jln2" event={"ID":"7b7d2e95-2ad1-47c3-a97c-65d3821742b3","Type":"ContainerDied","Data":"cef8a9511d88e86a16192f69400aa42f421f73f260b0eb830047155942406b3e"} Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.399399 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cef8a9511d88e86a16192f69400aa42f421f73f260b0eb830047155942406b3e" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.399337 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6jln2" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.401658 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7c6d-account-create-76dsd" event={"ID":"607cc9c4-da33-4557-b019-18efe88914f5","Type":"ContainerDied","Data":"025abd44793e4fb335d0b8352654bd9ca82b72e0ee7893c7a6885c3358c6fe7f"} Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.401708 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="025abd44793e4fb335d0b8352654bd9ca82b72e0ee7893c7a6885c3358c6fe7f" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.401680 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7c6d-account-create-76dsd" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.404041 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-698c-account-create-kdm74" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.404044 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-698c-account-create-kdm74" event={"ID":"e53c068b-3ea6-4b03-a740-e296a2f3f7e0","Type":"ContainerDied","Data":"69e9535d2a067e04ae7cb8832cec88fdcb5ebf044a9a4c69462f0b6fd039aa48"} Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.404107 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69e9535d2a067e04ae7cb8832cec88fdcb5ebf044a9a4c69462f0b6fd039aa48" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.406087 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-d9f4t" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.406084 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-d9f4t" event={"ID":"5acf409c-da92-42ec-982a-b2d1f34be104","Type":"ContainerDied","Data":"974679b1f2962cb84bdc57c55551b74a3f14960c1705e707a5e7ba7c38114d1e"} Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.406248 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="974679b1f2962cb84bdc57c55551b74a3f14960c1705e707a5e7ba7c38114d1e" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.408084 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-ede5-account-create-4bcf7" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.408082 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ede5-account-create-4bcf7" event={"ID":"5f82633b-8229-49c8-92f3-4bffcd57f7ba","Type":"ContainerDied","Data":"899c409cae5213a7a5f5c6c58a77964b11e22c33ff09af37dd72adcd7670dd8c"} Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.408197 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="899c409cae5213a7a5f5c6c58a77964b11e22c33ff09af37dd72adcd7670dd8c" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.410023 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-q2dwl" event={"ID":"8229b191-56fe-4e64-8a62-1213c86a792c","Type":"ContainerDied","Data":"ade3587bc211f710f2b39e2866c5515cdeb98573c36ed754e6c44b333f8ec7d4"} Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.410058 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ade3587bc211f710f2b39e2866c5515cdeb98573c36ed754e6c44b333f8ec7d4" Nov 24 17:07:42 crc kubenswrapper[4768]: I1124 17:07:42.410142 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-q2dwl" Nov 24 17:07:43 crc kubenswrapper[4768]: I1124 17:07:43.010073 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:07:43 crc kubenswrapper[4768]: I1124 17:07:43.098524 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5dmkj"] Nov 24 17:07:43 crc kubenswrapper[4768]: I1124 17:07:43.098787 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-5dmkj" podUID="7771e669-ecdf-44a6-9e16-409eade01b8a" containerName="dnsmasq-dns" containerID="cri-o://8daa70df6296ba9edffd866aa7b2ba8a6f4f31b69a4bd2c9c3de4658c291a1f4" gracePeriod=10 Nov 24 17:07:44 crc kubenswrapper[4768]: I1124 17:07:44.427513 4768 generic.go:334] "Generic (PLEG): container finished" podID="7771e669-ecdf-44a6-9e16-409eade01b8a" containerID="8daa70df6296ba9edffd866aa7b2ba8a6f4f31b69a4bd2c9c3de4658c291a1f4" exitCode=0 Nov 24 17:07:44 crc kubenswrapper[4768]: I1124 17:07:44.427553 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5dmkj" event={"ID":"7771e669-ecdf-44a6-9e16-409eade01b8a","Type":"ContainerDied","Data":"8daa70df6296ba9edffd866aa7b2ba8a6f4f31b69a4bd2c9c3de4658c291a1f4"} Nov 24 17:07:44 crc kubenswrapper[4768]: I1124 17:07:44.905825 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-5dmkj" Nov 24 17:07:44 crc kubenswrapper[4768]: I1124 17:07:44.990284 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-ovsdbserver-sb\") pod \"7771e669-ecdf-44a6-9e16-409eade01b8a\" (UID: \"7771e669-ecdf-44a6-9e16-409eade01b8a\") " Nov 24 17:07:44 crc kubenswrapper[4768]: I1124 17:07:44.990328 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-config\") pod \"7771e669-ecdf-44a6-9e16-409eade01b8a\" (UID: \"7771e669-ecdf-44a6-9e16-409eade01b8a\") " Nov 24 17:07:44 crc kubenswrapper[4768]: I1124 17:07:44.990374 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtmkr\" (UniqueName: \"kubernetes.io/projected/7771e669-ecdf-44a6-9e16-409eade01b8a-kube-api-access-gtmkr\") pod \"7771e669-ecdf-44a6-9e16-409eade01b8a\" (UID: \"7771e669-ecdf-44a6-9e16-409eade01b8a\") " Nov 24 17:07:44 crc kubenswrapper[4768]: I1124 17:07:44.990548 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-dns-svc\") pod \"7771e669-ecdf-44a6-9e16-409eade01b8a\" (UID: \"7771e669-ecdf-44a6-9e16-409eade01b8a\") " Nov 24 17:07:44 crc kubenswrapper[4768]: I1124 17:07:44.990584 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-ovsdbserver-nb\") pod \"7771e669-ecdf-44a6-9e16-409eade01b8a\" (UID: \"7771e669-ecdf-44a6-9e16-409eade01b8a\") " Nov 24 17:07:44 crc kubenswrapper[4768]: I1124 17:07:44.997618 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7771e669-ecdf-44a6-9e16-409eade01b8a-kube-api-access-gtmkr" (OuterVolumeSpecName: "kube-api-access-gtmkr") pod "7771e669-ecdf-44a6-9e16-409eade01b8a" (UID: "7771e669-ecdf-44a6-9e16-409eade01b8a"). InnerVolumeSpecName "kube-api-access-gtmkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:07:45 crc kubenswrapper[4768]: I1124 17:07:45.028176 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-config" (OuterVolumeSpecName: "config") pod "7771e669-ecdf-44a6-9e16-409eade01b8a" (UID: "7771e669-ecdf-44a6-9e16-409eade01b8a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:45 crc kubenswrapper[4768]: I1124 17:07:45.030604 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7771e669-ecdf-44a6-9e16-409eade01b8a" (UID: "7771e669-ecdf-44a6-9e16-409eade01b8a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:45 crc kubenswrapper[4768]: I1124 17:07:45.033874 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7771e669-ecdf-44a6-9e16-409eade01b8a" (UID: "7771e669-ecdf-44a6-9e16-409eade01b8a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:45 crc kubenswrapper[4768]: I1124 17:07:45.035187 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7771e669-ecdf-44a6-9e16-409eade01b8a" (UID: "7771e669-ecdf-44a6-9e16-409eade01b8a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:07:45 crc kubenswrapper[4768]: I1124 17:07:45.092336 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:45 crc kubenswrapper[4768]: I1124 17:07:45.092384 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:45 crc kubenswrapper[4768]: I1124 17:07:45.092400 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:45 crc kubenswrapper[4768]: I1124 17:07:45.092413 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7771e669-ecdf-44a6-9e16-409eade01b8a-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:45 crc kubenswrapper[4768]: I1124 17:07:45.092424 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtmkr\" (UniqueName: \"kubernetes.io/projected/7771e669-ecdf-44a6-9e16-409eade01b8a-kube-api-access-gtmkr\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:45 crc kubenswrapper[4768]: I1124 17:07:45.436583 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-5dmkj" event={"ID":"7771e669-ecdf-44a6-9e16-409eade01b8a","Type":"ContainerDied","Data":"28ea4b1f1d8811d338cd9f8aecaabdfb1f4aa28476ab68a0b646b9abd33f42f3"} Nov 24 17:07:45 crc kubenswrapper[4768]: I1124 17:07:45.436642 4768 scope.go:117] "RemoveContainer" containerID="8daa70df6296ba9edffd866aa7b2ba8a6f4f31b69a4bd2c9c3de4658c291a1f4" Nov 24 17:07:45 crc kubenswrapper[4768]: I1124 17:07:45.436642 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-5dmkj" Nov 24 17:07:45 crc kubenswrapper[4768]: I1124 17:07:45.472277 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5dmkj"] Nov 24 17:07:45 crc kubenswrapper[4768]: I1124 17:07:45.475171 4768 scope.go:117] "RemoveContainer" containerID="fc5901e3b749a938c7ccf094a5707b58d7b8fb799061d02f5f92b96cf45110b5" Nov 24 17:07:45 crc kubenswrapper[4768]: I1124 17:07:45.477422 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-5dmkj"] Nov 24 17:07:45 crc kubenswrapper[4768]: I1124 17:07:45.605198 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7771e669-ecdf-44a6-9e16-409eade01b8a" path="/var/lib/kubelet/pods/7771e669-ecdf-44a6-9e16-409eade01b8a/volumes" Nov 24 17:07:49 crc kubenswrapper[4768]: I1124 17:07:49.480650 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2cnhg" event={"ID":"006943d1-b308-4fb2-8af1-b54310ff2deb","Type":"ContainerStarted","Data":"c0b186d9e208b809daec273f120a2e47feea0d97e7789b0fd792e936e59f4a3a"} Nov 24 17:07:49 crc kubenswrapper[4768]: I1124 17:07:49.501061 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-2cnhg" podStartSLOduration=2.392872704 podStartE2EDuration="11.501044949s" podCreationTimestamp="2025-11-24 17:07:38 +0000 UTC" firstStartedPulling="2025-11-24 17:07:39.214706043 +0000 UTC m=+940.461674701" lastFinishedPulling="2025-11-24 17:07:48.322878288 +0000 UTC m=+949.569846946" observedRunningTime="2025-11-24 17:07:49.496001517 +0000 UTC m=+950.742970185" watchObservedRunningTime="2025-11-24 17:07:49.501044949 +0000 UTC m=+950.748013607" Nov 24 17:07:52 crc kubenswrapper[4768]: I1124 17:07:52.509099 4768 generic.go:334] "Generic (PLEG): container finished" podID="006943d1-b308-4fb2-8af1-b54310ff2deb" containerID="c0b186d9e208b809daec273f120a2e47feea0d97e7789b0fd792e936e59f4a3a" exitCode=0 Nov 24 17:07:52 crc kubenswrapper[4768]: I1124 17:07:52.509161 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2cnhg" event={"ID":"006943d1-b308-4fb2-8af1-b54310ff2deb","Type":"ContainerDied","Data":"c0b186d9e208b809daec273f120a2e47feea0d97e7789b0fd792e936e59f4a3a"} Nov 24 17:07:53 crc kubenswrapper[4768]: I1124 17:07:53.919820 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-2cnhg" Nov 24 17:07:53 crc kubenswrapper[4768]: I1124 17:07:53.948277 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b6m2\" (UniqueName: \"kubernetes.io/projected/006943d1-b308-4fb2-8af1-b54310ff2deb-kube-api-access-2b6m2\") pod \"006943d1-b308-4fb2-8af1-b54310ff2deb\" (UID: \"006943d1-b308-4fb2-8af1-b54310ff2deb\") " Nov 24 17:07:53 crc kubenswrapper[4768]: I1124 17:07:53.948573 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/006943d1-b308-4fb2-8af1-b54310ff2deb-combined-ca-bundle\") pod \"006943d1-b308-4fb2-8af1-b54310ff2deb\" (UID: \"006943d1-b308-4fb2-8af1-b54310ff2deb\") " Nov 24 17:07:53 crc kubenswrapper[4768]: I1124 17:07:53.948618 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/006943d1-b308-4fb2-8af1-b54310ff2deb-config-data\") pod \"006943d1-b308-4fb2-8af1-b54310ff2deb\" (UID: \"006943d1-b308-4fb2-8af1-b54310ff2deb\") " Nov 24 17:07:53 crc kubenswrapper[4768]: I1124 17:07:53.954534 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/006943d1-b308-4fb2-8af1-b54310ff2deb-kube-api-access-2b6m2" (OuterVolumeSpecName: "kube-api-access-2b6m2") pod "006943d1-b308-4fb2-8af1-b54310ff2deb" (UID: "006943d1-b308-4fb2-8af1-b54310ff2deb"). InnerVolumeSpecName "kube-api-access-2b6m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:07:53 crc kubenswrapper[4768]: I1124 17:07:53.997421 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/006943d1-b308-4fb2-8af1-b54310ff2deb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "006943d1-b308-4fb2-8af1-b54310ff2deb" (UID: "006943d1-b308-4fb2-8af1-b54310ff2deb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.003702 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/006943d1-b308-4fb2-8af1-b54310ff2deb-config-data" (OuterVolumeSpecName: "config-data") pod "006943d1-b308-4fb2-8af1-b54310ff2deb" (UID: "006943d1-b308-4fb2-8af1-b54310ff2deb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.050765 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b6m2\" (UniqueName: \"kubernetes.io/projected/006943d1-b308-4fb2-8af1-b54310ff2deb-kube-api-access-2b6m2\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.050813 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/006943d1-b308-4fb2-8af1-b54310ff2deb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.050831 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/006943d1-b308-4fb2-8af1-b54310ff2deb-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.530244 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2cnhg" event={"ID":"006943d1-b308-4fb2-8af1-b54310ff2deb","Type":"ContainerDied","Data":"2a23c48f7ea43f8a770cb7597ed72f4ade9e754be07763875917b0eaab266103"} Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.530667 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a23c48f7ea43f8a770cb7597ed72f4ade9e754be07763875917b0eaab266103" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.530391 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2cnhg" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.829025 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jbrxv"] Nov 24 17:07:54 crc kubenswrapper[4768]: E1124 17:07:54.833666 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7771e669-ecdf-44a6-9e16-409eade01b8a" containerName="dnsmasq-dns" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.833929 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7771e669-ecdf-44a6-9e16-409eade01b8a" containerName="dnsmasq-dns" Nov 24 17:07:54 crc kubenswrapper[4768]: E1124 17:07:54.834018 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7d2e95-2ad1-47c3-a97c-65d3821742b3" containerName="mariadb-database-create" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.834110 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7d2e95-2ad1-47c3-a97c-65d3821742b3" containerName="mariadb-database-create" Nov 24 17:07:54 crc kubenswrapper[4768]: E1124 17:07:54.834205 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7771e669-ecdf-44a6-9e16-409eade01b8a" containerName="init" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.834290 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7771e669-ecdf-44a6-9e16-409eade01b8a" containerName="init" Nov 24 17:07:54 crc kubenswrapper[4768]: E1124 17:07:54.834397 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="006943d1-b308-4fb2-8af1-b54310ff2deb" containerName="keystone-db-sync" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.834489 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="006943d1-b308-4fb2-8af1-b54310ff2deb" containerName="keystone-db-sync" Nov 24 17:07:54 crc kubenswrapper[4768]: E1124 17:07:54.834578 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e53c068b-3ea6-4b03-a740-e296a2f3f7e0" containerName="mariadb-account-create" Nov 24 17:07:54 crc 
Nov 24 17:07:54 crc kubenswrapper[4768]: E1124 17:07:54.834801 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8229b191-56fe-4e64-8a62-1213c86a792c" containerName="mariadb-database-create"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.834888 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8229b191-56fe-4e64-8a62-1213c86a792c" containerName="mariadb-database-create"
Nov 24 17:07:54 crc kubenswrapper[4768]: E1124 17:07:54.834976 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f82633b-8229-49c8-92f3-4bffcd57f7ba" containerName="mariadb-account-create"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.835054 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f82633b-8229-49c8-92f3-4bffcd57f7ba" containerName="mariadb-account-create"
Nov 24 17:07:54 crc kubenswrapper[4768]: E1124 17:07:54.835150 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5acf409c-da92-42ec-982a-b2d1f34be104" containerName="mariadb-database-create"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.835223 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="5acf409c-da92-42ec-982a-b2d1f34be104" containerName="mariadb-database-create"
Nov 24 17:07:54 crc kubenswrapper[4768]: E1124 17:07:54.835309 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="607cc9c4-da33-4557-b019-18efe88914f5" containerName="mariadb-account-create"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.835412 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="607cc9c4-da33-4557-b019-18efe88914f5" containerName="mariadb-account-create"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.835736 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f82633b-8229-49c8-92f3-4bffcd57f7ba" containerName="mariadb-account-create"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.835831 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b7d2e95-2ad1-47c3-a97c-65d3821742b3" containerName="mariadb-database-create"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.835912 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="607cc9c4-da33-4557-b019-18efe88914f5" containerName="mariadb-account-create"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.835996 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e53c068b-3ea6-4b03-a740-e296a2f3f7e0" containerName="mariadb-account-create"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.836081 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="5acf409c-da92-42ec-982a-b2d1f34be104" containerName="mariadb-database-create"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.836164 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8229b191-56fe-4e64-8a62-1213c86a792c" containerName="mariadb-database-create"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.836252 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="006943d1-b308-4fb2-8af1-b54310ff2deb" containerName="keystone-db-sync"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.836340 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7771e669-ecdf-44a6-9e16-409eade01b8a" containerName="dnsmasq-dns"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.837115 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jbrxv"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.842452 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.842593 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.843150 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-k6c5s"]
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.843937 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hz2rd"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.844109 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.846080 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-k6c5s"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.848429 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.872243 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jbrxv"]
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.884248 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-config\") pod \"dnsmasq-dns-847c4cc679-k6c5s\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") " pod="openstack/dnsmasq-dns-847c4cc679-k6c5s"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.884298 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdbzj\" (UniqueName: \"kubernetes.io/projected/e0947813-175a-4246-acdb-53b09311ab93-kube-api-access-kdbzj\") pod \"keystone-bootstrap-jbrxv\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") " pod="openstack/keystone-bootstrap-jbrxv"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.884324 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-fernet-keys\") pod \"keystone-bootstrap-jbrxv\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") " pod="openstack/keystone-bootstrap-jbrxv"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.884357 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-credential-keys\") pod \"keystone-bootstrap-jbrxv\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") " pod="openstack/keystone-bootstrap-jbrxv"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.884372 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-k6c5s\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") " pod="openstack/dnsmasq-dns-847c4cc679-k6c5s"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.884390 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-combined-ca-bundle\") pod \"keystone-bootstrap-jbrxv\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") " pod="openstack/keystone-bootstrap-jbrxv"
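The paired cpu_manager/state_mem and memory_manager entries above fire as the kubelet admits keystone-bootstrap-jbrxv: the resource managers drop checkpointed state for containers whose pods are gone (the finished db-create and account-create jobs, the completed keystone-db-sync, and the deleted dnsmasq pod). The E-prefixed lines are emitted at error severity but describe routine cleanup. A sketch that collects which (podUID, containerName) pairs were purged; kubelet.log is again an assumed filename:

import re
from collections import defaultdict

# Matches both cpu_manager "RemoveStaleState: removing container" and
# memory_manager "RemoveStaleState removing state" lines.
PAT = re.compile(
    r'RemoveStaleState:? removing \w+" podUID="([0-9a-f-]+)" containerName="([\w-]+)"')

stale: dict[str, set[str]] = defaultdict(set)
with open("kubelet.log") as f:   # assumed filename
    for line in f:
        m = PAT.search(line)
        if m:
            stale[m.group(1)].add(m.group(2))

for uid, names in sorted(stale.items()):
    print(uid, sorted(names))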
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-combined-ca-bundle\") pod \"keystone-bootstrap-jbrxv\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") " pod="openstack/keystone-bootstrap-jbrxv" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.884414 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-k6c5s\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") " pod="openstack/dnsmasq-dns-847c4cc679-k6c5s" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.884447 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-k6c5s\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") " pod="openstack/dnsmasq-dns-847c4cc679-k6c5s" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.884477 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-scripts\") pod \"keystone-bootstrap-jbrxv\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") " pod="openstack/keystone-bootstrap-jbrxv" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.884498 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-config-data\") pod \"keystone-bootstrap-jbrxv\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") " pod="openstack/keystone-bootstrap-jbrxv" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.884516 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv5vs\" (UniqueName: \"kubernetes.io/projected/fd80b446-d807-41b1-89a5-857f3ba03729-kube-api-access-kv5vs\") pod \"dnsmasq-dns-847c4cc679-k6c5s\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") " pod="openstack/dnsmasq-dns-847c4cc679-k6c5s" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.884532 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-dns-svc\") pod \"dnsmasq-dns-847c4cc679-k6c5s\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") " pod="openstack/dnsmasq-dns-847c4cc679-k6c5s" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.886413 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-k6c5s"] Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.982880 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-zcgvl"] Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.984164 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-zcgvl" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.985550 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-scripts\") pod \"keystone-bootstrap-jbrxv\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") " pod="openstack/keystone-bootstrap-jbrxv" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.985668 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-config-data\") pod \"keystone-bootstrap-jbrxv\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") " pod="openstack/keystone-bootstrap-jbrxv" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.985745 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv5vs\" (UniqueName: \"kubernetes.io/projected/fd80b446-d807-41b1-89a5-857f3ba03729-kube-api-access-kv5vs\") pod \"dnsmasq-dns-847c4cc679-k6c5s\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") " pod="openstack/dnsmasq-dns-847c4cc679-k6c5s" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.985838 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-dns-svc\") pod \"dnsmasq-dns-847c4cc679-k6c5s\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") " pod="openstack/dnsmasq-dns-847c4cc679-k6c5s" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.985948 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-config\") pod \"dnsmasq-dns-847c4cc679-k6c5s\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") " pod="openstack/dnsmasq-dns-847c4cc679-k6c5s" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.986029 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdbzj\" (UniqueName: \"kubernetes.io/projected/e0947813-175a-4246-acdb-53b09311ab93-kube-api-access-kdbzj\") pod \"keystone-bootstrap-jbrxv\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") " pod="openstack/keystone-bootstrap-jbrxv" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.986113 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-fernet-keys\") pod \"keystone-bootstrap-jbrxv\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") " pod="openstack/keystone-bootstrap-jbrxv" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.986195 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-credential-keys\") pod \"keystone-bootstrap-jbrxv\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") " pod="openstack/keystone-bootstrap-jbrxv" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.986266 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-k6c5s\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") " pod="openstack/dnsmasq-dns-847c4cc679-k6c5s" Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.986363 4768 
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.986454 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-k6c5s\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") " pod="openstack/dnsmasq-dns-847c4cc679-k6c5s"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.986563 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-k6c5s\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") " pod="openstack/dnsmasq-dns-847c4cc679-k6c5s"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.993616 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-dns-svc\") pod \"dnsmasq-dns-847c4cc679-k6c5s\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") " pod="openstack/dnsmasq-dns-847c4cc679-k6c5s"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.993862 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-config\") pod \"dnsmasq-dns-847c4cc679-k6c5s\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") " pod="openstack/dnsmasq-dns-847c4cc679-k6c5s"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.994060 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-k6c5s\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") " pod="openstack/dnsmasq-dns-847c4cc679-k6c5s"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.994454 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-config-data\") pod \"keystone-bootstrap-jbrxv\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") " pod="openstack/keystone-bootstrap-jbrxv"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.994517 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-k6c5s\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") " pod="openstack/dnsmasq-dns-847c4cc679-k6c5s"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.994528 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-k6c5s\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") " pod="openstack/dnsmasq-dns-847c4cc679-k6c5s"
Nov 24 17:07:54 crc kubenswrapper[4768]: I1124 17:07:54.995148 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-fernet-keys\") pod \"keystone-bootstrap-jbrxv\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") " pod="openstack/keystone-bootstrap-jbrxv"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.000728 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-credential-keys\") pod \"keystone-bootstrap-jbrxv\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") " pod="openstack/keystone-bootstrap-jbrxv"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.005591 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-zcgvl"]
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.008025 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-combined-ca-bundle\") pod \"keystone-bootstrap-jbrxv\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") " pod="openstack/keystone-bootstrap-jbrxv"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.011786 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-scripts\") pod \"keystone-bootstrap-jbrxv\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") " pod="openstack/keystone-bootstrap-jbrxv"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.032709 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdbzj\" (UniqueName: \"kubernetes.io/projected/e0947813-175a-4246-acdb-53b09311ab93-kube-api-access-kdbzj\") pod \"keystone-bootstrap-jbrxv\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") " pod="openstack/keystone-bootstrap-jbrxv"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.033670 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv5vs\" (UniqueName: \"kubernetes.io/projected/fd80b446-d807-41b1-89a5-857f3ba03729-kube-api-access-kv5vs\") pod \"dnsmasq-dns-847c4cc679-k6c5s\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") " pod="openstack/dnsmasq-dns-847c4cc679-k6c5s"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.067866 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-c555-account-create-x2sqr"]
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.070387 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-c555-account-create-x2sqr"
Need to start a new one" pod="openstack/ironic-c555-account-create-x2sqr" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.071970 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.091686 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjm9m\" (UniqueName: \"kubernetes.io/projected/e8334061-2f24-4d34-a921-10d05dd32ec7-kube-api-access-rjm9m\") pod \"ironic-db-create-zcgvl\" (UID: \"e8334061-2f24-4d34-a921-10d05dd32ec7\") " pod="openstack/ironic-db-create-zcgvl" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.091737 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8334061-2f24-4d34-a921-10d05dd32ec7-operator-scripts\") pod \"ironic-db-create-zcgvl\" (UID: \"e8334061-2f24-4d34-a921-10d05dd32ec7\") " pod="openstack/ironic-db-create-zcgvl" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.094303 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.097447 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.104911 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.106608 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.161054 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jbrxv" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.162134 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.175569 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-k6c5s" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.199457 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-scripts\") pod \"ceilometer-0\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " pod="openstack/ceilometer-0" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.199501 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-config-data\") pod \"ceilometer-0\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " pod="openstack/ceilometer-0" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.199521 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1869f53-e1c3-4194-a66f-8d16238e0fe3-log-httpd\") pod \"ceilometer-0\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " pod="openstack/ceilometer-0" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.199562 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b5ab834-a98f-4ace-a22f-cde15ebf7f4b-operator-scripts\") pod \"ironic-c555-account-create-x2sqr\" (UID: \"2b5ab834-a98f-4ace-a22f-cde15ebf7f4b\") " pod="openstack/ironic-c555-account-create-x2sqr" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.199581 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97r92\" (UniqueName: \"kubernetes.io/projected/e1869f53-e1c3-4194-a66f-8d16238e0fe3-kube-api-access-97r92\") pod \"ceilometer-0\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " pod="openstack/ceilometer-0" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.199629 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " pod="openstack/ceilometer-0" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.199655 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " pod="openstack/ceilometer-0" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.199684 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjm9m\" (UniqueName: \"kubernetes.io/projected/e8334061-2f24-4d34-a921-10d05dd32ec7-kube-api-access-rjm9m\") pod \"ironic-db-create-zcgvl\" (UID: \"e8334061-2f24-4d34-a921-10d05dd32ec7\") " pod="openstack/ironic-db-create-zcgvl" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.199713 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1869f53-e1c3-4194-a66f-8d16238e0fe3-run-httpd\") pod \"ceilometer-0\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " pod="openstack/ceilometer-0" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.199736 
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.199768 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8wns\" (UniqueName: \"kubernetes.io/projected/2b5ab834-a98f-4ace-a22f-cde15ebf7f4b-kube-api-access-v8wns\") pod \"ironic-c555-account-create-x2sqr\" (UID: \"2b5ab834-a98f-4ace-a22f-cde15ebf7f4b\") " pod="openstack/ironic-c555-account-create-x2sqr"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.202846 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-6qc9l"]
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.212680 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8334061-2f24-4d34-a921-10d05dd32ec7-operator-scripts\") pod \"ironic-db-create-zcgvl\" (UID: \"e8334061-2f24-4d34-a921-10d05dd32ec7\") " pod="openstack/ironic-db-create-zcgvl"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.221758 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6qc9l"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.231482 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-c555-account-create-x2sqr"]
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.236843 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjm9m\" (UniqueName: \"kubernetes.io/projected/e8334061-2f24-4d34-a921-10d05dd32ec7-kube-api-access-rjm9m\") pod \"ironic-db-create-zcgvl\" (UID: \"e8334061-2f24-4d34-a921-10d05dd32ec7\") " pod="openstack/ironic-db-create-zcgvl"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.246541 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.246767 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5c5z8"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.267184 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-w8vsq"]
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.268782 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-w8vsq"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.276061 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.296052 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-l5fgx"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.296194 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.305426 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b5ab834-a98f-4ace-a22f-cde15ebf7f4b-operator-scripts\") pod \"ironic-c555-account-create-x2sqr\" (UID: \"2b5ab834-a98f-4ace-a22f-cde15ebf7f4b\") " pod="openstack/ironic-c555-account-create-x2sqr"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.305463 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97r92\" (UniqueName: \"kubernetes.io/projected/e1869f53-e1c3-4194-a66f-8d16238e0fe3-kube-api-access-97r92\") pod \"ceilometer-0\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " pod="openstack/ceilometer-0"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.305512 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/632b579a-27e1-4431-a7ad-32631cf804b6-etc-machine-id\") pod \"cinder-db-sync-w8vsq\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " pod="openstack/cinder-db-sync-w8vsq"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.305536 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92xwn\" (UniqueName: \"kubernetes.io/projected/632b579a-27e1-4431-a7ad-32631cf804b6-kube-api-access-92xwn\") pod \"cinder-db-sync-w8vsq\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " pod="openstack/cinder-db-sync-w8vsq"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.305563 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/56d73241-8027-4861-83ae-a766feceadd2-db-sync-config-data\") pod \"barbican-db-sync-6qc9l\" (UID: \"56d73241-8027-4861-83ae-a766feceadd2\") " pod="openstack/barbican-db-sync-6qc9l"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.305584 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " pod="openstack/ceilometer-0"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.305610 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " pod="openstack/ceilometer-0"
Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.305626 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-combined-ca-bundle\") pod \"cinder-db-sync-w8vsq\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " pod="openstack/cinder-db-sync-w8vsq"
\"cinder-db-sync-w8vsq\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " pod="openstack/cinder-db-sync-w8vsq" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.305652 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d73241-8027-4861-83ae-a766feceadd2-combined-ca-bundle\") pod \"barbican-db-sync-6qc9l\" (UID: \"56d73241-8027-4861-83ae-a766feceadd2\") " pod="openstack/barbican-db-sync-6qc9l" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.305683 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1869f53-e1c3-4194-a66f-8d16238e0fe3-run-httpd\") pod \"ceilometer-0\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " pod="openstack/ceilometer-0" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.305707 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-db-sync-config-data\") pod \"cinder-db-sync-w8vsq\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " pod="openstack/cinder-db-sync-w8vsq" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.305730 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8wns\" (UniqueName: \"kubernetes.io/projected/2b5ab834-a98f-4ace-a22f-cde15ebf7f4b-kube-api-access-v8wns\") pod \"ironic-c555-account-create-x2sqr\" (UID: \"2b5ab834-a98f-4ace-a22f-cde15ebf7f4b\") " pod="openstack/ironic-c555-account-create-x2sqr" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.305745 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-config-data\") pod \"cinder-db-sync-w8vsq\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " pod="openstack/cinder-db-sync-w8vsq" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.305771 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhp7l\" (UniqueName: \"kubernetes.io/projected/56d73241-8027-4861-83ae-a766feceadd2-kube-api-access-vhp7l\") pod \"barbican-db-sync-6qc9l\" (UID: \"56d73241-8027-4861-83ae-a766feceadd2\") " pod="openstack/barbican-db-sync-6qc9l" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.305792 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-scripts\") pod \"ceilometer-0\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " pod="openstack/ceilometer-0" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.305810 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-config-data\") pod \"ceilometer-0\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " pod="openstack/ceilometer-0" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.305829 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1869f53-e1c3-4194-a66f-8d16238e0fe3-log-httpd\") pod \"ceilometer-0\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " pod="openstack/ceilometer-0" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 
17:07:55.305859 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-scripts\") pod \"cinder-db-sync-w8vsq\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " pod="openstack/cinder-db-sync-w8vsq" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.306362 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b5ab834-a98f-4ace-a22f-cde15ebf7f4b-operator-scripts\") pod \"ironic-c555-account-create-x2sqr\" (UID: \"2b5ab834-a98f-4ace-a22f-cde15ebf7f4b\") " pod="openstack/ironic-c555-account-create-x2sqr" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.315094 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1869f53-e1c3-4194-a66f-8d16238e0fe3-log-httpd\") pod \"ceilometer-0\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " pod="openstack/ceilometer-0" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.315409 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1869f53-e1c3-4194-a66f-8d16238e0fe3-run-httpd\") pod \"ceilometer-0\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " pod="openstack/ceilometer-0" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.329293 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " pod="openstack/ceilometer-0" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.329372 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-w8vsq"] Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.337117 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-scripts\") pod \"ceilometer-0\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " pod="openstack/ceilometer-0" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.338057 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-config-data\") pod \"ceilometer-0\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " pod="openstack/ceilometer-0" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.351035 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " pod="openstack/ceilometer-0" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.356336 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97r92\" (UniqueName: \"kubernetes.io/projected/e1869f53-e1c3-4194-a66f-8d16238e0fe3-kube-api-access-97r92\") pod \"ceilometer-0\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " pod="openstack/ceilometer-0" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.356797 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8wns\" (UniqueName: 
\"kubernetes.io/projected/2b5ab834-a98f-4ace-a22f-cde15ebf7f4b-kube-api-access-v8wns\") pod \"ironic-c555-account-create-x2sqr\" (UID: \"2b5ab834-a98f-4ace-a22f-cde15ebf7f4b\") " pod="openstack/ironic-c555-account-create-x2sqr" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.368922 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6qc9l"] Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.389920 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-zcgvl" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.408643 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/56d73241-8027-4861-83ae-a766feceadd2-db-sync-config-data\") pod \"barbican-db-sync-6qc9l\" (UID: \"56d73241-8027-4861-83ae-a766feceadd2\") " pod="openstack/barbican-db-sync-6qc9l" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.408697 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-combined-ca-bundle\") pod \"cinder-db-sync-w8vsq\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " pod="openstack/cinder-db-sync-w8vsq" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.408722 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d73241-8027-4861-83ae-a766feceadd2-combined-ca-bundle\") pod \"barbican-db-sync-6qc9l\" (UID: \"56d73241-8027-4861-83ae-a766feceadd2\") " pod="openstack/barbican-db-sync-6qc9l" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.408758 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-db-sync-config-data\") pod \"cinder-db-sync-w8vsq\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " pod="openstack/cinder-db-sync-w8vsq" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.408781 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-config-data\") pod \"cinder-db-sync-w8vsq\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " pod="openstack/cinder-db-sync-w8vsq" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.408804 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhp7l\" (UniqueName: \"kubernetes.io/projected/56d73241-8027-4861-83ae-a766feceadd2-kube-api-access-vhp7l\") pod \"barbican-db-sync-6qc9l\" (UID: \"56d73241-8027-4861-83ae-a766feceadd2\") " pod="openstack/barbican-db-sync-6qc9l" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.408931 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-scripts\") pod \"cinder-db-sync-w8vsq\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " pod="openstack/cinder-db-sync-w8vsq" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.408984 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/632b579a-27e1-4431-a7ad-32631cf804b6-etc-machine-id\") pod \"cinder-db-sync-w8vsq\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " 
pod="openstack/cinder-db-sync-w8vsq" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.409005 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92xwn\" (UniqueName: \"kubernetes.io/projected/632b579a-27e1-4431-a7ad-32631cf804b6-kube-api-access-92xwn\") pod \"cinder-db-sync-w8vsq\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " pod="openstack/cinder-db-sync-w8vsq" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.418136 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/56d73241-8027-4861-83ae-a766feceadd2-db-sync-config-data\") pod \"barbican-db-sync-6qc9l\" (UID: \"56d73241-8027-4861-83ae-a766feceadd2\") " pod="openstack/barbican-db-sync-6qc9l" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.423271 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-combined-ca-bundle\") pod \"cinder-db-sync-w8vsq\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " pod="openstack/cinder-db-sync-w8vsq" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.426505 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/632b579a-27e1-4431-a7ad-32631cf804b6-etc-machine-id\") pod \"cinder-db-sync-w8vsq\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " pod="openstack/cinder-db-sync-w8vsq" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.426824 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-config-data\") pod \"cinder-db-sync-w8vsq\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " pod="openstack/cinder-db-sync-w8vsq" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.428252 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-c555-account-create-x2sqr" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.439850 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-scripts\") pod \"cinder-db-sync-w8vsq\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " pod="openstack/cinder-db-sync-w8vsq" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.440372 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.440376 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d73241-8027-4861-83ae-a766feceadd2-combined-ca-bundle\") pod \"barbican-db-sync-6qc9l\" (UID: \"56d73241-8027-4861-83ae-a766feceadd2\") " pod="openstack/barbican-db-sync-6qc9l" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.455385 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-db-sync-config-data\") pod \"cinder-db-sync-w8vsq\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " pod="openstack/cinder-db-sync-w8vsq" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.476627 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-cntsw"] Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.477805 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cntsw" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.502228 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.502528 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-z4qzl" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.502713 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.512617 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92xwn\" (UniqueName: \"kubernetes.io/projected/632b579a-27e1-4431-a7ad-32631cf804b6-kube-api-access-92xwn\") pod \"cinder-db-sync-w8vsq\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " pod="openstack/cinder-db-sync-w8vsq" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.522047 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhp7l\" (UniqueName: \"kubernetes.io/projected/56d73241-8027-4861-83ae-a766feceadd2-kube-api-access-vhp7l\") pod \"barbican-db-sync-6qc9l\" (UID: \"56d73241-8027-4861-83ae-a766feceadd2\") " pod="openstack/barbican-db-sync-6qc9l" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.532431 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14b5f872-0b2d-4937-bc26-dac18713087f-combined-ca-bundle\") pod \"neutron-db-sync-cntsw\" (UID: \"14b5f872-0b2d-4937-bc26-dac18713087f\") " pod="openstack/neutron-db-sync-cntsw" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.532606 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jhvh\" (UniqueName: \"kubernetes.io/projected/14b5f872-0b2d-4937-bc26-dac18713087f-kube-api-access-5jhvh\") pod \"neutron-db-sync-cntsw\" (UID: \"14b5f872-0b2d-4937-bc26-dac18713087f\") " pod="openstack/neutron-db-sync-cntsw" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.532736 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/14b5f872-0b2d-4937-bc26-dac18713087f-config\") pod \"neutron-db-sync-cntsw\" (UID: \"14b5f872-0b2d-4937-bc26-dac18713087f\") " 
pod="openstack/neutron-db-sync-cntsw" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.633428 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-cntsw"] Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.649756 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-v5vq6"] Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.657143 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-v5vq6" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.674586 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bclrl" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.674752 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.674866 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14b5f872-0b2d-4937-bc26-dac18713087f-combined-ca-bundle\") pod \"neutron-db-sync-cntsw\" (UID: \"14b5f872-0b2d-4937-bc26-dac18713087f\") " pod="openstack/neutron-db-sync-cntsw" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.674931 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jhvh\" (UniqueName: \"kubernetes.io/projected/14b5f872-0b2d-4937-bc26-dac18713087f-kube-api-access-5jhvh\") pod \"neutron-db-sync-cntsw\" (UID: \"14b5f872-0b2d-4937-bc26-dac18713087f\") " pod="openstack/neutron-db-sync-cntsw" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.674995 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/14b5f872-0b2d-4937-bc26-dac18713087f-config\") pod \"neutron-db-sync-cntsw\" (UID: \"14b5f872-0b2d-4937-bc26-dac18713087f\") " pod="openstack/neutron-db-sync-cntsw" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.681319 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.681607 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-6qc9l" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.693598 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14b5f872-0b2d-4937-bc26-dac18713087f-combined-ca-bundle\") pod \"neutron-db-sync-cntsw\" (UID: \"14b5f872-0b2d-4937-bc26-dac18713087f\") " pod="openstack/neutron-db-sync-cntsw" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.702643 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-v5vq6"] Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.707163 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/14b5f872-0b2d-4937-bc26-dac18713087f-config\") pod \"neutron-db-sync-cntsw\" (UID: \"14b5f872-0b2d-4937-bc26-dac18713087f\") " pod="openstack/neutron-db-sync-cntsw" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.713002 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-k6c5s"] Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.719055 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-rtr97"] Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.720399 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.724392 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jhvh\" (UniqueName: \"kubernetes.io/projected/14b5f872-0b2d-4937-bc26-dac18713087f-kube-api-access-5jhvh\") pod \"neutron-db-sync-cntsw\" (UID: \"14b5f872-0b2d-4937-bc26-dac18713087f\") " pod="openstack/neutron-db-sync-cntsw" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.730162 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-w8vsq" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.734445 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-rtr97"] Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.782611 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-config-data\") pod \"placement-db-sync-v5vq6\" (UID: \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\") " pod="openstack/placement-db-sync-v5vq6" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.782735 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp5t4\" (UniqueName: \"kubernetes.io/projected/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-kube-api-access-kp5t4\") pod \"placement-db-sync-v5vq6\" (UID: \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\") " pod="openstack/placement-db-sync-v5vq6" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.782790 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-combined-ca-bundle\") pod \"placement-db-sync-v5vq6\" (UID: \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\") " pod="openstack/placement-db-sync-v5vq6" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.782846 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-scripts\") pod \"placement-db-sync-v5vq6\" (UID: \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\") " pod="openstack/placement-db-sync-v5vq6" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.782900 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-logs\") pod \"placement-db-sync-v5vq6\" (UID: \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\") " pod="openstack/placement-db-sync-v5vq6" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.881736 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-cntsw" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.884621 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp5t4\" (UniqueName: \"kubernetes.io/projected/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-kube-api-access-kp5t4\") pod \"placement-db-sync-v5vq6\" (UID: \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\") " pod="openstack/placement-db-sync-v5vq6" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.884659 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-rtr97\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.884693 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-config\") pod \"dnsmasq-dns-785d8bcb8c-rtr97\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.884715 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-combined-ca-bundle\") pod \"placement-db-sync-v5vq6\" (UID: \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\") " pod="openstack/placement-db-sync-v5vq6" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.884737 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-scripts\") pod \"placement-db-sync-v5vq6\" (UID: \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\") " pod="openstack/placement-db-sync-v5vq6" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.884773 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-logs\") pod \"placement-db-sync-v5vq6\" (UID: \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\") " pod="openstack/placement-db-sync-v5vq6" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.884793 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-rtr97\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.884827 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-rtr97\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.884845 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-config-data\") pod \"placement-db-sync-v5vq6\" (UID: \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\") " pod="openstack/placement-db-sync-v5vq6" Nov 24 
17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.884861 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nchm\" (UniqueName: \"kubernetes.io/projected/59eb907c-4af0-495b-9885-b144bc2d611d-kube-api-access-4nchm\") pod \"dnsmasq-dns-785d8bcb8c-rtr97\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.884890 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-rtr97\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.892715 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-scripts\") pod \"placement-db-sync-v5vq6\" (UID: \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\") " pod="openstack/placement-db-sync-v5vq6" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.892826 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-logs\") pod \"placement-db-sync-v5vq6\" (UID: \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\") " pod="openstack/placement-db-sync-v5vq6" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.893277 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-combined-ca-bundle\") pod \"placement-db-sync-v5vq6\" (UID: \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\") " pod="openstack/placement-db-sync-v5vq6" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.893407 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-config-data\") pod \"placement-db-sync-v5vq6\" (UID: \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\") " pod="openstack/placement-db-sync-v5vq6" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.907175 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp5t4\" (UniqueName: \"kubernetes.io/projected/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-kube-api-access-kp5t4\") pod \"placement-db-sync-v5vq6\" (UID: \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\") " pod="openstack/placement-db-sync-v5vq6" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.969457 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.970892 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.974191 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-nwmdg" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.974429 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.975003 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.975164 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.981030 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.987737 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-rtr97\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.987781 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-config\") pod \"dnsmasq-dns-785d8bcb8c-rtr97\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.987839 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-rtr97\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.987880 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-rtr97\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.987899 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nchm\" (UniqueName: \"kubernetes.io/projected/59eb907c-4af0-495b-9885-b144bc2d611d-kube-api-access-4nchm\") pod \"dnsmasq-dns-785d8bcb8c-rtr97\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.987923 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-rtr97\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.988844 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-rtr97\" (UID: 
\"59eb907c-4af0-495b-9885-b144bc2d611d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.989344 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-rtr97\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.989847 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-config\") pod \"dnsmasq-dns-785d8bcb8c-rtr97\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.990472 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-rtr97\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" Nov 24 17:07:55 crc kubenswrapper[4768]: I1124 17:07:55.990963 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-rtr97\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.010443 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nchm\" (UniqueName: \"kubernetes.io/projected/59eb907c-4af0-495b-9885-b144bc2d611d-kube-api-access-4nchm\") pod \"dnsmasq-dns-785d8bcb8c-rtr97\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.046425 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-v5vq6" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.062023 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.078479 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.079897 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.087353 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.087803 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.089911 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-scripts\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.089976 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72d79e4c-84ab-49d2-a162-6e9d595145bb-logs\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.090008 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.090054 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.090101 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-config-data\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.090164 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72d79e4c-84ab-49d2-a162-6e9d595145bb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.090234 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlkxt\" (UniqueName: \"kubernetes.io/projected/72d79e4c-84ab-49d2-a162-6e9d595145bb-kube-api-access-jlkxt\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.090256 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-combined-ca-bundle\") pod \"glance-default-external-api-0\" 
(UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.103192 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.112556 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jbrxv"] Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.191631 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72d79e4c-84ab-49d2-a162-6e9d595145bb-logs\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.191676 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.191711 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f329092-4d3a-424f-9f41-6b25ad9fe381-logs\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.191733 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.191760 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.191780 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-config-data\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.191825 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.192047 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc 
kubenswrapper[4768]: I1124 17:07:56.192064 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.192081 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72d79e4c-84ab-49d2-a162-6e9d595145bb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.192103 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.192119 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64w8d\" (UniqueName: \"kubernetes.io/projected/3f329092-4d3a-424f-9f41-6b25ad9fe381-kube-api-access-64w8d\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.192165 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f329092-4d3a-424f-9f41-6b25ad9fe381-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.192184 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlkxt\" (UniqueName: \"kubernetes.io/projected/72d79e4c-84ab-49d2-a162-6e9d595145bb-kube-api-access-jlkxt\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.192200 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.192221 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-scripts\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.193681 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72d79e4c-84ab-49d2-a162-6e9d595145bb-logs\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc 
kubenswrapper[4768]: I1124 17:07:56.194015 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.194482 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72d79e4c-84ab-49d2-a162-6e9d595145bb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.202496 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-scripts\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.202677 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.205075 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-config-data\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.224381 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlkxt\" (UniqueName: \"kubernetes.io/projected/72d79e4c-84ab-49d2-a162-6e9d595145bb-kube-api-access-jlkxt\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.228184 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.241828 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.293684 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f329092-4d3a-424f-9f41-6b25ad9fe381-logs\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.294075 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.294135 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.294152 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.294173 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.294196 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.294214 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64w8d\" (UniqueName: \"kubernetes.io/projected/3f329092-4d3a-424f-9f41-6b25ad9fe381-kube-api-access-64w8d\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.294260 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f329092-4d3a-424f-9f41-6b25ad9fe381-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.294388 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f329092-4d3a-424f-9f41-6b25ad9fe381-logs\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.298883 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.299226 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/3f329092-4d3a-424f-9f41-6b25ad9fe381-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.301542 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.302287 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.305271 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.308109 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.325196 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64w8d\" (UniqueName: \"kubernetes.io/projected/3f329092-4d3a-424f-9f41-6b25ad9fe381-kube-api-access-64w8d\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.331668 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") " pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: W1124 17:07:56.489480 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8334061_2f24_4d34_a921_10d05dd32ec7.slice/crio-c4dfff7088a65c217898bc47848fdc8e6967deafd50103fe9424d19d537c6ee3 WatchSource:0}: Error finding container c4dfff7088a65c217898bc47848fdc8e6967deafd50103fe9424d19d537c6ee3: Status 404 returned error can't find the container with id c4dfff7088a65c217898bc47848fdc8e6967deafd50103fe9424d19d537c6ee3 Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.492963 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-k6c5s"] Nov 24 17:07:56 crc kubenswrapper[4768]: W1124 17:07:56.499869 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1869f53_e1c3_4194_a66f_8d16238e0fe3.slice/crio-cc6266c404d4ff4fab4b55436adb1a0827accbd88dfaf438db2e14f7edcb050e WatchSource:0}: Error finding container 
cc6266c404d4ff4fab4b55436adb1a0827accbd88dfaf438db2e14f7edcb050e: Status 404 returned error can't find the container with id cc6266c404d4ff4fab4b55436adb1a0827accbd88dfaf438db2e14f7edcb050e Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.516862 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-zcgvl"] Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.528596 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.538104 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:07:56 crc kubenswrapper[4768]: W1124 17:07:56.543218 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b5ab834_a98f_4ace_a22f_cde15ebf7f4b.slice/crio-b2dd97c3ec05ee6fde7de0529d5becf3d88de483be8aaecf84ccad322a66c99c WatchSource:0}: Error finding container b2dd97c3ec05ee6fde7de0529d5becf3d88de483be8aaecf84ccad322a66c99c: Status 404 returned error can't find the container with id b2dd97c3ec05ee6fde7de0529d5becf3d88de483be8aaecf84ccad322a66c99c Nov 24 17:07:56 crc kubenswrapper[4768]: W1124 17:07:56.544515 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56d73241_8027_4861_83ae_a766feceadd2.slice/crio-bfffe2da4f2011991c3b48eeb053c23607b70815114b192afcacd99512ee5319 WatchSource:0}: Error finding container bfffe2da4f2011991c3b48eeb053c23607b70815114b192afcacd99512ee5319: Status 404 returned error can't find the container with id bfffe2da4f2011991c3b48eeb053c23607b70815114b192afcacd99512ee5319 Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.545608 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-c555-account-create-x2sqr"] Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.547589 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 17:07:56 crc kubenswrapper[4768]: I1124 17:07:56.555768 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6qc9l"] Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:56.708711 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-cntsw"] Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:56.721679 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6qc9l" event={"ID":"56d73241-8027-4861-83ae-a766feceadd2","Type":"ContainerStarted","Data":"bfffe2da4f2011991c3b48eeb053c23607b70815114b192afcacd99512ee5319"} Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:56.728908 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-c555-account-create-x2sqr" event={"ID":"2b5ab834-a98f-4ace-a22f-cde15ebf7f4b","Type":"ContainerStarted","Data":"b2dd97c3ec05ee6fde7de0529d5becf3d88de483be8aaecf84ccad322a66c99c"} Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:56.731051 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-w8vsq"] Nov 24 17:07:57 crc kubenswrapper[4768]: W1124 17:07:56.740604 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14b5f872_0b2d_4937_bc26_dac18713087f.slice/crio-5dd785857142942159a87f12228a2781917eaf1ef732e98816e0af115a7ef9a8 WatchSource:0}: Error finding container 5dd785857142942159a87f12228a2781917eaf1ef732e98816e0af115a7ef9a8: Status 404 returned error can't find the container with id 5dd785857142942159a87f12228a2781917eaf1ef732e98816e0af115a7ef9a8 Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:56.740929 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jbrxv" event={"ID":"e0947813-175a-4246-acdb-53b09311ab93","Type":"ContainerStarted","Data":"9020b83ecd6f8712aac189211a99ec31f6291489c38a56d1fe94ef174a6bba28"} Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:56.740964 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jbrxv" event={"ID":"e0947813-175a-4246-acdb-53b09311ab93","Type":"ContainerStarted","Data":"881e360af79bb179392d25e90e1b26ff6bae038acb08c2aff8985e5b8173da7d"} Nov 24 17:07:57 crc kubenswrapper[4768]: W1124 17:07:56.744645 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod632b579a_27e1_4431_a7ad_32631cf804b6.slice/crio-63966f2c506edf91fe0302238b8c69346252127a99f6d0611244fb1b0af5455a WatchSource:0}: Error finding container 63966f2c506edf91fe0302238b8c69346252127a99f6d0611244fb1b0af5455a: Status 404 returned error can't find the container with id 63966f2c506edf91fe0302238b8c69346252127a99f6d0611244fb1b0af5455a Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:56.747272 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-zcgvl" event={"ID":"e8334061-2f24-4d34-a921-10d05dd32ec7","Type":"ContainerStarted","Data":"c4dfff7088a65c217898bc47848fdc8e6967deafd50103fe9424d19d537c6ee3"} Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:56.750418 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-k6c5s" event={"ID":"fd80b446-d807-41b1-89a5-857f3ba03729","Type":"ContainerStarted","Data":"839399e8d33caadba012fc952c4449ae8ced984bfa68085bc255b1216d02ab77"} Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 
17:07:56.754679 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1869f53-e1c3-4194-a66f-8d16238e0fe3","Type":"ContainerStarted","Data":"cc6266c404d4ff4fab4b55436adb1a0827accbd88dfaf438db2e14f7edcb050e"} Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:56.771676 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jbrxv" podStartSLOduration=2.77087665 podStartE2EDuration="2.77087665s" podCreationTimestamp="2025-11-24 17:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:07:56.760399394 +0000 UTC m=+958.007368052" watchObservedRunningTime="2025-11-24 17:07:56.77087665 +0000 UTC m=+958.017845308" Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:56.854869 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-v5vq6"] Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:56.891015 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-rtr97"] Nov 24 17:07:57 crc kubenswrapper[4768]: W1124 17:07:56.905786 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59eb907c_4af0_495b_9885_b144bc2d611d.slice/crio-41d14101d585badf6d08505ed96952020e424045475b55ae9fc35a4efaa12ad2 WatchSource:0}: Error finding container 41d14101d585badf6d08505ed96952020e424045475b55ae9fc35a4efaa12ad2: Status 404 returned error can't find the container with id 41d14101d585badf6d08505ed96952020e424045475b55ae9fc35a4efaa12ad2 Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:56.991049 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:57.111578 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:57.627542 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:57.816834 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-w8vsq" event={"ID":"632b579a-27e1-4431-a7ad-32631cf804b6","Type":"ContainerStarted","Data":"63966f2c506edf91fe0302238b8c69346252127a99f6d0611244fb1b0af5455a"} Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:57.828049 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:57.828467 4768 generic.go:334] "Generic (PLEG): container finished" podID="e8334061-2f24-4d34-a921-10d05dd32ec7" containerID="a0aeeeb1b45f605a1fe9f36b1d8f7305e727e5c21ab06523d10f0e2164965154" exitCode=0 Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:57.828642 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-zcgvl" event={"ID":"e8334061-2f24-4d34-a921-10d05dd32ec7","Type":"ContainerDied","Data":"a0aeeeb1b45f605a1fe9f36b1d8f7305e727e5c21ab06523d10f0e2164965154"} Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:57.835723 4768 generic.go:334] "Generic (PLEG): container finished" podID="59eb907c-4af0-495b-9885-b144bc2d611d" containerID="6dd86d526917d2b075e0c5cc81dc33821cb98ae8c4f0120fc1d17727b92bbc34" exitCode=0 Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:57.835795 4768 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" event={"ID":"59eb907c-4af0-495b-9885-b144bc2d611d","Type":"ContainerDied","Data":"6dd86d526917d2b075e0c5cc81dc33821cb98ae8c4f0120fc1d17727b92bbc34"} Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:57.835825 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" event={"ID":"59eb907c-4af0-495b-9885-b144bc2d611d","Type":"ContainerStarted","Data":"41d14101d585badf6d08505ed96952020e424045475b55ae9fc35a4efaa12ad2"} Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:57.852610 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v5vq6" event={"ID":"a25ecf7c-a4b8-40e9-97b1-2b52c3094474","Type":"ContainerStarted","Data":"d30828a170cb5cc651e2a844f16a1420884cda98f50854f7d4b62074d505731a"} Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:57.860323 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cntsw" event={"ID":"14b5f872-0b2d-4937-bc26-dac18713087f","Type":"ContainerStarted","Data":"423e81fee4762b3ef15b48c3fb59a65136bf15c6b1423b40632346b8636a4462"} Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:57.860390 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cntsw" event={"ID":"14b5f872-0b2d-4937-bc26-dac18713087f","Type":"ContainerStarted","Data":"5dd785857142942159a87f12228a2781917eaf1ef732e98816e0af115a7ef9a8"} Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:57.863003 4768 generic.go:334] "Generic (PLEG): container finished" podID="fd80b446-d807-41b1-89a5-857f3ba03729" containerID="fecd18cfb5de542056cf1c2f7ffb8b46772615e830c9b60087515abc6d95afda" exitCode=0 Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:57.863050 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-k6c5s" event={"ID":"fd80b446-d807-41b1-89a5-857f3ba03729","Type":"ContainerDied","Data":"fecd18cfb5de542056cf1c2f7ffb8b46772615e830c9b60087515abc6d95afda"} Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:57.875789 4768 generic.go:334] "Generic (PLEG): container finished" podID="2b5ab834-a98f-4ace-a22f-cde15ebf7f4b" containerID="713baed01a67c2d3ed923ff5fd48259de70ec823325ba09c324af3a92924ae23" exitCode=0 Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:57.877140 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-c555-account-create-x2sqr" event={"ID":"2b5ab834-a98f-4ace-a22f-cde15ebf7f4b","Type":"ContainerDied","Data":"713baed01a67c2d3ed923ff5fd48259de70ec823325ba09c324af3a92924ae23"} Nov 24 17:07:57 crc kubenswrapper[4768]: I1124 17:07:57.940567 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-cntsw" podStartSLOduration=2.940546993 podStartE2EDuration="2.940546993s" podCreationTimestamp="2025-11-24 17:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:07:57.925893329 +0000 UTC m=+959.172861987" watchObservedRunningTime="2025-11-24 17:07:57.940546993 +0000 UTC m=+959.187515651" Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.053539 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.425039 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-k6c5s"
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.442703 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-dns-svc\") pod \"fd80b446-d807-41b1-89a5-857f3ba03729\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") "
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.442766 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-ovsdbserver-nb\") pod \"fd80b446-d807-41b1-89a5-857f3ba03729\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") "
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.442805 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-ovsdbserver-sb\") pod \"fd80b446-d807-41b1-89a5-857f3ba03729\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") "
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.442860 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-dns-swift-storage-0\") pod \"fd80b446-d807-41b1-89a5-857f3ba03729\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") "
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.442923 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv5vs\" (UniqueName: \"kubernetes.io/projected/fd80b446-d807-41b1-89a5-857f3ba03729-kube-api-access-kv5vs\") pod \"fd80b446-d807-41b1-89a5-857f3ba03729\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") "
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.443036 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-config\") pod \"fd80b446-d807-41b1-89a5-857f3ba03729\" (UID: \"fd80b446-d807-41b1-89a5-857f3ba03729\") "
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.456021 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd80b446-d807-41b1-89a5-857f3ba03729-kube-api-access-kv5vs" (OuterVolumeSpecName: "kube-api-access-kv5vs") pod "fd80b446-d807-41b1-89a5-857f3ba03729" (UID: "fd80b446-d807-41b1-89a5-857f3ba03729"). InnerVolumeSpecName "kube-api-access-kv5vs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.472188 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fd80b446-d807-41b1-89a5-857f3ba03729" (UID: "fd80b446-d807-41b1-89a5-857f3ba03729"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.476663 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fd80b446-d807-41b1-89a5-857f3ba03729" (UID: "fd80b446-d807-41b1-89a5-857f3ba03729"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.485349 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fd80b446-d807-41b1-89a5-857f3ba03729" (UID: "fd80b446-d807-41b1-89a5-857f3ba03729"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.490808 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-config" (OuterVolumeSpecName: "config") pod "fd80b446-d807-41b1-89a5-857f3ba03729" (UID: "fd80b446-d807-41b1-89a5-857f3ba03729"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.500727 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fd80b446-d807-41b1-89a5-857f3ba03729" (UID: "fd80b446-d807-41b1-89a5-857f3ba03729"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.545446 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.545479 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.545490 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.545500 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.545509 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv5vs\" (UniqueName: \"kubernetes.io/projected/fd80b446-d807-41b1-89a5-857f3ba03729-kube-api-access-kv5vs\") on node \"crc\" DevicePath \"\""
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.545516 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd80b446-d807-41b1-89a5-857f3ba03729-config\") on node \"crc\" DevicePath \"\""
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.891479 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3f329092-4d3a-424f-9f41-6b25ad9fe381","Type":"ContainerStarted","Data":"d5d1e57c005d856c6c3ced0c5ccfdc28c84864b8982c5c1420650d97f09be587"}
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.896444 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" event={"ID":"59eb907c-4af0-495b-9885-b144bc2d611d","Type":"ContainerStarted","Data":"5a2917a24aea688028ce8529ae09e36ec0e1b098c10c7697bcb134920bd1cfb1"}
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.896623 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97"
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.899945 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-k6c5s" event={"ID":"fd80b446-d807-41b1-89a5-857f3ba03729","Type":"ContainerDied","Data":"839399e8d33caadba012fc952c4449ae8ced984bfa68085bc255b1216d02ab77"}
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.900003 4768 scope.go:117] "RemoveContainer" containerID="fecd18cfb5de542056cf1c2f7ffb8b46772615e830c9b60087515abc6d95afda"
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.900132 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-k6c5s"
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.902576 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"72d79e4c-84ab-49d2-a162-6e9d595145bb","Type":"ContainerStarted","Data":"807e2d990a76608b4d1b009ae74985c1fa6ebc4cc59415783d66082f57504381"}
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.917892 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" podStartSLOduration=3.917872839 podStartE2EDuration="3.917872839s" podCreationTimestamp="2025-11-24 17:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:07:58.912631481 +0000 UTC m=+960.159600159" watchObservedRunningTime="2025-11-24 17:07:58.917872839 +0000 UTC m=+960.164841497"
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.983091 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-k6c5s"]
Nov 24 17:07:58 crc kubenswrapper[4768]: I1124 17:07:58.983689 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-k6c5s"]
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.399903 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-c555-account-create-x2sqr"
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.420610 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-zcgvl"
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.477415 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8wns\" (UniqueName: \"kubernetes.io/projected/2b5ab834-a98f-4ace-a22f-cde15ebf7f4b-kube-api-access-v8wns\") pod \"2b5ab834-a98f-4ace-a22f-cde15ebf7f4b\" (UID: \"2b5ab834-a98f-4ace-a22f-cde15ebf7f4b\") "
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.477476 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8334061-2f24-4d34-a921-10d05dd32ec7-operator-scripts\") pod \"e8334061-2f24-4d34-a921-10d05dd32ec7\" (UID: \"e8334061-2f24-4d34-a921-10d05dd32ec7\") "
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.477712 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjm9m\" (UniqueName: \"kubernetes.io/projected/e8334061-2f24-4d34-a921-10d05dd32ec7-kube-api-access-rjm9m\") pod \"e8334061-2f24-4d34-a921-10d05dd32ec7\" (UID: \"e8334061-2f24-4d34-a921-10d05dd32ec7\") "
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.477774 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b5ab834-a98f-4ace-a22f-cde15ebf7f4b-operator-scripts\") pod \"2b5ab834-a98f-4ace-a22f-cde15ebf7f4b\" (UID: \"2b5ab834-a98f-4ace-a22f-cde15ebf7f4b\") "
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.478689 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8334061-2f24-4d34-a921-10d05dd32ec7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e8334061-2f24-4d34-a921-10d05dd32ec7" (UID: "e8334061-2f24-4d34-a921-10d05dd32ec7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.479857 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b5ab834-a98f-4ace-a22f-cde15ebf7f4b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b5ab834-a98f-4ace-a22f-cde15ebf7f4b" (UID: "2b5ab834-a98f-4ace-a22f-cde15ebf7f4b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.484457 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8334061-2f24-4d34-a921-10d05dd32ec7-kube-api-access-rjm9m" (OuterVolumeSpecName: "kube-api-access-rjm9m") pod "e8334061-2f24-4d34-a921-10d05dd32ec7" (UID: "e8334061-2f24-4d34-a921-10d05dd32ec7"). InnerVolumeSpecName "kube-api-access-rjm9m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.484590 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b5ab834-a98f-4ace-a22f-cde15ebf7f4b-kube-api-access-v8wns" (OuterVolumeSpecName: "kube-api-access-v8wns") pod "2b5ab834-a98f-4ace-a22f-cde15ebf7f4b" (UID: "2b5ab834-a98f-4ace-a22f-cde15ebf7f4b"). InnerVolumeSpecName "kube-api-access-v8wns". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.588553 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjm9m\" (UniqueName: \"kubernetes.io/projected/e8334061-2f24-4d34-a921-10d05dd32ec7-kube-api-access-rjm9m\") on node \"crc\" DevicePath \"\""
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.588887 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b5ab834-a98f-4ace-a22f-cde15ebf7f4b-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.588966 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8wns\" (UniqueName: \"kubernetes.io/projected/2b5ab834-a98f-4ace-a22f-cde15ebf7f4b-kube-api-access-v8wns\") on node \"crc\" DevicePath \"\""
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.589037 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8334061-2f24-4d34-a921-10d05dd32ec7-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.630603 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd80b446-d807-41b1-89a5-857f3ba03729" path="/var/lib/kubelet/pods/fd80b446-d807-41b1-89a5-857f3ba03729/volumes"
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.919818 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"72d79e4c-84ab-49d2-a162-6e9d595145bb","Type":"ContainerStarted","Data":"494a1198bf84c3c79246dc817d20d890f516635df40a97fe4862160bd4e78880"}
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.924187 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-c555-account-create-x2sqr" event={"ID":"2b5ab834-a98f-4ace-a22f-cde15ebf7f4b","Type":"ContainerDied","Data":"b2dd97c3ec05ee6fde7de0529d5becf3d88de483be8aaecf84ccad322a66c99c"}
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.924229 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2dd97c3ec05ee6fde7de0529d5becf3d88de483be8aaecf84ccad322a66c99c"
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.924468 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-c555-account-create-x2sqr"
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.934140 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3f329092-4d3a-424f-9f41-6b25ad9fe381","Type":"ContainerStarted","Data":"f5b1ad6f2eeeb44a033e1273cfebab172d3ddc95b77c834d2a8746a6be9312a1"}
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.936005 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-zcgvl" event={"ID":"e8334061-2f24-4d34-a921-10d05dd32ec7","Type":"ContainerDied","Data":"c4dfff7088a65c217898bc47848fdc8e6967deafd50103fe9424d19d537c6ee3"}
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.936057 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4dfff7088a65c217898bc47848fdc8e6967deafd50103fe9424d19d537c6ee3"
Nov 24 17:07:59 crc kubenswrapper[4768]: I1124 17:07:59.936024 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-zcgvl"
Nov 24 17:08:00 crc kubenswrapper[4768]: I1124 17:08:00.946071 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3f329092-4d3a-424f-9f41-6b25ad9fe381","Type":"ContainerStarted","Data":"3340a8865e8c71575947d1a306273686ce9113959f8276fdaa4b6d9b532cf5ea"}
Nov 24 17:08:00 crc kubenswrapper[4768]: I1124 17:08:00.946176 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3f329092-4d3a-424f-9f41-6b25ad9fe381" containerName="glance-log" containerID="cri-o://f5b1ad6f2eeeb44a033e1273cfebab172d3ddc95b77c834d2a8746a6be9312a1" gracePeriod=30
Nov 24 17:08:00 crc kubenswrapper[4768]: I1124 17:08:00.946222 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3f329092-4d3a-424f-9f41-6b25ad9fe381" containerName="glance-httpd" containerID="cri-o://3340a8865e8c71575947d1a306273686ce9113959f8276fdaa4b6d9b532cf5ea" gracePeriod=30
Nov 24 17:08:00 crc kubenswrapper[4768]: I1124 17:08:00.948565 4768 generic.go:334] "Generic (PLEG): container finished" podID="e0947813-175a-4246-acdb-53b09311ab93" containerID="9020b83ecd6f8712aac189211a99ec31f6291489c38a56d1fe94ef174a6bba28" exitCode=0
Nov 24 17:08:00 crc kubenswrapper[4768]: I1124 17:08:00.948663 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jbrxv" event={"ID":"e0947813-175a-4246-acdb-53b09311ab93","Type":"ContainerDied","Data":"9020b83ecd6f8712aac189211a99ec31f6291489c38a56d1fe94ef174a6bba28"}
Nov 24 17:08:00 crc kubenswrapper[4768]: I1124 17:08:00.951450 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"72d79e4c-84ab-49d2-a162-6e9d595145bb","Type":"ContainerStarted","Data":"034b8af04a1e0c566be704ab00b23ea81af183e7774ac45c861826bee16b9a3a"}
Nov 24 17:08:00 crc kubenswrapper[4768]: I1124 17:08:00.951575 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="72d79e4c-84ab-49d2-a162-6e9d595145bb" containerName="glance-log" containerID="cri-o://494a1198bf84c3c79246dc817d20d890f516635df40a97fe4862160bd4e78880" gracePeriod=30
Nov 24 17:08:00 crc kubenswrapper[4768]: I1124 17:08:00.951726 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="72d79e4c-84ab-49d2-a162-6e9d595145bb" containerName="glance-httpd" containerID="cri-o://034b8af04a1e0c566be704ab00b23ea81af183e7774ac45c861826bee16b9a3a" gracePeriod=30
Nov 24 17:08:00 crc kubenswrapper[4768]: I1124 17:08:00.971514 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.971496737 podStartE2EDuration="5.971496737s" podCreationTimestamp="2025-11-24 17:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:08:00.969230123 +0000 UTC m=+962.216198781" watchObservedRunningTime="2025-11-24 17:08:00.971496737 +0000 UTC m=+962.218465395"
Nov 24 17:08:01 crc kubenswrapper[4768]: I1124 17:08:01.020888 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.020865502 podStartE2EDuration="7.020865502s" podCreationTimestamp="2025-11-24 17:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:08:01.015753418 +0000 UTC m=+962.262722076" watchObservedRunningTime="2025-11-24 17:08:01.020865502 +0000 UTC m=+962.267834150"
Nov 24 17:08:01 crc kubenswrapper[4768]: I1124 17:08:01.983089 4768 generic.go:334] "Generic (PLEG): container finished" podID="72d79e4c-84ab-49d2-a162-6e9d595145bb" containerID="034b8af04a1e0c566be704ab00b23ea81af183e7774ac45c861826bee16b9a3a" exitCode=0
Nov 24 17:08:01 crc kubenswrapper[4768]: I1124 17:08:01.983121 4768 generic.go:334] "Generic (PLEG): container finished" podID="72d79e4c-84ab-49d2-a162-6e9d595145bb" containerID="494a1198bf84c3c79246dc817d20d890f516635df40a97fe4862160bd4e78880" exitCode=143
Nov 24 17:08:01 crc kubenswrapper[4768]: I1124 17:08:01.983163 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"72d79e4c-84ab-49d2-a162-6e9d595145bb","Type":"ContainerDied","Data":"034b8af04a1e0c566be704ab00b23ea81af183e7774ac45c861826bee16b9a3a"}
Nov 24 17:08:01 crc kubenswrapper[4768]: I1124 17:08:01.983191 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"72d79e4c-84ab-49d2-a162-6e9d595145bb","Type":"ContainerDied","Data":"494a1198bf84c3c79246dc817d20d890f516635df40a97fe4862160bd4e78880"}
Nov 24 17:08:01 crc kubenswrapper[4768]: I1124 17:08:01.985799 4768 generic.go:334] "Generic (PLEG): container finished" podID="3f329092-4d3a-424f-9f41-6b25ad9fe381" containerID="3340a8865e8c71575947d1a306273686ce9113959f8276fdaa4b6d9b532cf5ea" exitCode=0
Nov 24 17:08:01 crc kubenswrapper[4768]: I1124 17:08:01.985845 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3f329092-4d3a-424f-9f41-6b25ad9fe381","Type":"ContainerDied","Data":"3340a8865e8c71575947d1a306273686ce9113959f8276fdaa4b6d9b532cf5ea"}
Nov 24 17:08:01 crc kubenswrapper[4768]: I1124 17:08:01.985875 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3f329092-4d3a-424f-9f41-6b25ad9fe381","Type":"ContainerDied","Data":"f5b1ad6f2eeeb44a033e1273cfebab172d3ddc95b77c834d2a8746a6be9312a1"}
Nov 24 17:08:01 crc kubenswrapper[4768]: I1124 17:08:01.985855 4768 generic.go:334] "Generic (PLEG): container finished" podID="3f329092-4d3a-424f-9f41-6b25ad9fe381" containerID="f5b1ad6f2eeeb44a033e1273cfebab172d3ddc95b77c834d2a8746a6be9312a1" exitCode=143
Nov 24 17:08:03 crc kubenswrapper[4768]: E1124 17:08:03.557728 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b5ab834_a98f_4ace_a22f_cde15ebf7f4b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b5ab834_a98f_4ace_a22f_cde15ebf7f4b.slice/crio-b2dd97c3ec05ee6fde7de0529d5becf3d88de483be8aaecf84ccad322a66c99c\": RecentStats: unable to find data in memory cache]"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.345556 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-jtdld"]
Nov 24 17:08:05 crc kubenswrapper[4768]: E1124 17:08:05.346293 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8334061-2f24-4d34-a921-10d05dd32ec7" containerName="mariadb-database-create"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.346312 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8334061-2f24-4d34-a921-10d05dd32ec7" containerName="mariadb-database-create"
Nov 24 17:08:05 crc kubenswrapper[4768]: E1124 17:08:05.346360 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd80b446-d807-41b1-89a5-857f3ba03729" containerName="init"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.346367 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd80b446-d807-41b1-89a5-857f3ba03729" containerName="init"
Nov 24 17:08:05 crc kubenswrapper[4768]: E1124 17:08:05.346378 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b5ab834-a98f-4ace-a22f-cde15ebf7f4b" containerName="mariadb-account-create"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.346384 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5ab834-a98f-4ace-a22f-cde15ebf7f4b" containerName="mariadb-account-create"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.346545 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b5ab834-a98f-4ace-a22f-cde15ebf7f4b" containerName="mariadb-account-create"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.346563 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd80b446-d807-41b1-89a5-857f3ba03729" containerName="init"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.346576 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8334061-2f24-4d34-a921-10d05dd32ec7" containerName="mariadb-database-create"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.347406 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-jtdld"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.349793 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.350100 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.352467 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-dockercfg-b4cxm"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.375613 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-jtdld"]
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.409072 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/443cde2a-91e0-404e-a067-00558608d888-config-data\") pod \"ironic-db-sync-jtdld\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " pod="openstack/ironic-db-sync-jtdld"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.409183 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/443cde2a-91e0-404e-a067-00558608d888-config-data-merged\") pod \"ironic-db-sync-jtdld\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " pod="openstack/ironic-db-sync-jtdld"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.409281 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvsdw\" (UniqueName: \"kubernetes.io/projected/443cde2a-91e0-404e-a067-00558608d888-kube-api-access-tvsdw\") pod \"ironic-db-sync-jtdld\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " pod="openstack/ironic-db-sync-jtdld"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.409380 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/443cde2a-91e0-404e-a067-00558608d888-combined-ca-bundle\") pod \"ironic-db-sync-jtdld\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " pod="openstack/ironic-db-sync-jtdld"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.409426 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/443cde2a-91e0-404e-a067-00558608d888-etc-podinfo\") pod \"ironic-db-sync-jtdld\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " pod="openstack/ironic-db-sync-jtdld"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.409928 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/443cde2a-91e0-404e-a067-00558608d888-scripts\") pod \"ironic-db-sync-jtdld\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " pod="openstack/ironic-db-sync-jtdld"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.511845 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/443cde2a-91e0-404e-a067-00558608d888-config-data-merged\") pod \"ironic-db-sync-jtdld\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " pod="openstack/ironic-db-sync-jtdld"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.511910 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvsdw\" (UniqueName: \"kubernetes.io/projected/443cde2a-91e0-404e-a067-00558608d888-kube-api-access-tvsdw\") pod \"ironic-db-sync-jtdld\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " pod="openstack/ironic-db-sync-jtdld"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.511945 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/443cde2a-91e0-404e-a067-00558608d888-combined-ca-bundle\") pod \"ironic-db-sync-jtdld\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " pod="openstack/ironic-db-sync-jtdld"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.511973 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/443cde2a-91e0-404e-a067-00558608d888-etc-podinfo\") pod \"ironic-db-sync-jtdld\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " pod="openstack/ironic-db-sync-jtdld"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.512062 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/443cde2a-91e0-404e-a067-00558608d888-scripts\") pod \"ironic-db-sync-jtdld\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " pod="openstack/ironic-db-sync-jtdld"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.512125 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/443cde2a-91e0-404e-a067-00558608d888-config-data\") pod \"ironic-db-sync-jtdld\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " pod="openstack/ironic-db-sync-jtdld"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.517866 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/443cde2a-91e0-404e-a067-00558608d888-config-data\") pod \"ironic-db-sync-jtdld\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " pod="openstack/ironic-db-sync-jtdld"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.518110 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/443cde2a-91e0-404e-a067-00558608d888-config-data-merged\") pod \"ironic-db-sync-jtdld\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " pod="openstack/ironic-db-sync-jtdld"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.521951 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/443cde2a-91e0-404e-a067-00558608d888-etc-podinfo\") pod \"ironic-db-sync-jtdld\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " pod="openstack/ironic-db-sync-jtdld"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.522530 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/443cde2a-91e0-404e-a067-00558608d888-combined-ca-bundle\") pod \"ironic-db-sync-jtdld\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " pod="openstack/ironic-db-sync-jtdld"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.525065 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/443cde2a-91e0-404e-a067-00558608d888-scripts\") pod \"ironic-db-sync-jtdld\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " pod="openstack/ironic-db-sync-jtdld"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.537206 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvsdw\" (UniqueName: \"kubernetes.io/projected/443cde2a-91e0-404e-a067-00558608d888-kube-api-access-tvsdw\") pod \"ironic-db-sync-jtdld\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " pod="openstack/ironic-db-sync-jtdld"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.623283 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jbrxv"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.664179 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-jtdld"
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.716860 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-fernet-keys\") pod \"e0947813-175a-4246-acdb-53b09311ab93\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") "
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.716925 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-combined-ca-bundle\") pod \"e0947813-175a-4246-acdb-53b09311ab93\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") "
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.716947 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-credential-keys\") pod \"e0947813-175a-4246-acdb-53b09311ab93\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") "
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.716990 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdbzj\" (UniqueName: \"kubernetes.io/projected/e0947813-175a-4246-acdb-53b09311ab93-kube-api-access-kdbzj\") pod \"e0947813-175a-4246-acdb-53b09311ab93\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") "
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.717020 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-scripts\") pod \"e0947813-175a-4246-acdb-53b09311ab93\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") "
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.717073 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-config-data\") pod \"e0947813-175a-4246-acdb-53b09311ab93\" (UID: \"e0947813-175a-4246-acdb-53b09311ab93\") "
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.722659 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0947813-175a-4246-acdb-53b09311ab93-kube-api-access-kdbzj" (OuterVolumeSpecName: "kube-api-access-kdbzj") pod "e0947813-175a-4246-acdb-53b09311ab93" (UID: "e0947813-175a-4246-acdb-53b09311ab93"). InnerVolumeSpecName "kube-api-access-kdbzj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.722690 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e0947813-175a-4246-acdb-53b09311ab93" (UID: "e0947813-175a-4246-acdb-53b09311ab93"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.724412 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-scripts" (OuterVolumeSpecName: "scripts") pod "e0947813-175a-4246-acdb-53b09311ab93" (UID: "e0947813-175a-4246-acdb-53b09311ab93"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.725631 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e0947813-175a-4246-acdb-53b09311ab93" (UID: "e0947813-175a-4246-acdb-53b09311ab93"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.750567 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-config-data" (OuterVolumeSpecName: "config-data") pod "e0947813-175a-4246-acdb-53b09311ab93" (UID: "e0947813-175a-4246-acdb-53b09311ab93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.779013 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0947813-175a-4246-acdb-53b09311ab93" (UID: "e0947813-175a-4246-acdb-53b09311ab93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.822821 4768 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-fernet-keys\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.823284 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.823446 4768 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-credential-keys\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.823530 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdbzj\" (UniqueName: \"kubernetes.io/projected/e0947813-175a-4246-acdb-53b09311ab93-kube-api-access-kdbzj\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.823595 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:05 crc kubenswrapper[4768]: I1124 17:08:05.823658 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0947813-175a-4246-acdb-53b09311ab93-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.029902 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jbrxv" event={"ID":"e0947813-175a-4246-acdb-53b09311ab93","Type":"ContainerDied","Data":"881e360af79bb179392d25e90e1b26ff6bae038acb08c2aff8985e5b8173da7d"}
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.029951 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="881e360af79bb179392d25e90e1b26ff6bae038acb08c2aff8985e5b8173da7d"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.029954 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jbrxv"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.064456 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.119386 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-gnsqw"]
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.119676 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" podUID="76b0f1d1-484b-4959-963f-35a843f11fcc" containerName="dnsmasq-dns" containerID="cri-o://e6bb1e1a1b7e61de94bb06703b58b69677a946f38e0448fd0c8a6340c64f9f1d" gracePeriod=10
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.716378 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jbrxv"]
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.725928 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jbrxv"]
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.798400 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-rdt7d"]
Nov 24 17:08:06 crc kubenswrapper[4768]: E1124 17:08:06.798892 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0947813-175a-4246-acdb-53b09311ab93" containerName="keystone-bootstrap"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.798917 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0947813-175a-4246-acdb-53b09311ab93" containerName="keystone-bootstrap"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.799112 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0947813-175a-4246-acdb-53b09311ab93" containerName="keystone-bootstrap"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.799920 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rdt7d"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.812312 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rdt7d"]
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.833606 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hz2rd"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.833816 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.833941 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.834033 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.842238 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-fernet-keys\") pod \"keystone-bootstrap-rdt7d\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") " pod="openstack/keystone-bootstrap-rdt7d"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.842494 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-combined-ca-bundle\") pod \"keystone-bootstrap-rdt7d\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") " pod="openstack/keystone-bootstrap-rdt7d"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.842606 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-scripts\") pod \"keystone-bootstrap-rdt7d\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") " pod="openstack/keystone-bootstrap-rdt7d"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.842734 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-config-data\") pod \"keystone-bootstrap-rdt7d\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") " pod="openstack/keystone-bootstrap-rdt7d"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.842896 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-credential-keys\") pod \"keystone-bootstrap-rdt7d\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") " pod="openstack/keystone-bootstrap-rdt7d"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.843021 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfwgm\" (UniqueName: \"kubernetes.io/projected/07b29f36-8738-4aff-b55f-9bf0ce77e344-kube-api-access-rfwgm\") pod \"keystone-bootstrap-rdt7d\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") " pod="openstack/keystone-bootstrap-rdt7d"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.944560 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-fernet-keys\") pod \"keystone-bootstrap-rdt7d\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") " pod="openstack/keystone-bootstrap-rdt7d"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.944616 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-combined-ca-bundle\") pod \"keystone-bootstrap-rdt7d\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") " pod="openstack/keystone-bootstrap-rdt7d"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.944651 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-scripts\") pod \"keystone-bootstrap-rdt7d\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") " pod="openstack/keystone-bootstrap-rdt7d"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.944678 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-config-data\") pod \"keystone-bootstrap-rdt7d\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") " pod="openstack/keystone-bootstrap-rdt7d"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.944712 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-credential-keys\") pod \"keystone-bootstrap-rdt7d\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") " pod="openstack/keystone-bootstrap-rdt7d"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.944741 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfwgm\" (UniqueName: \"kubernetes.io/projected/07b29f36-8738-4aff-b55f-9bf0ce77e344-kube-api-access-rfwgm\") pod \"keystone-bootstrap-rdt7d\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") " pod="openstack/keystone-bootstrap-rdt7d"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.949096 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-fernet-keys\") pod \"keystone-bootstrap-rdt7d\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") " pod="openstack/keystone-bootstrap-rdt7d"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.949272 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-scripts\") pod \"keystone-bootstrap-rdt7d\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") " pod="openstack/keystone-bootstrap-rdt7d"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.950333 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-combined-ca-bundle\") pod \"keystone-bootstrap-rdt7d\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") " pod="openstack/keystone-bootstrap-rdt7d"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.950990 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-credential-keys\") pod \"keystone-bootstrap-rdt7d\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") " pod="openstack/keystone-bootstrap-rdt7d"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.952020 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-config-data\") pod \"keystone-bootstrap-rdt7d\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") " pod="openstack/keystone-bootstrap-rdt7d"
Nov 24 17:08:06 crc kubenswrapper[4768]: I1124 17:08:06.960548 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfwgm\" (UniqueName: \"kubernetes.io/projected/07b29f36-8738-4aff-b55f-9bf0ce77e344-kube-api-access-rfwgm\") pod \"keystone-bootstrap-rdt7d\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") " pod="openstack/keystone-bootstrap-rdt7d"
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.040326 4768 generic.go:334] "Generic (PLEG): container finished" podID="76b0f1d1-484b-4959-963f-35a843f11fcc" containerID="e6bb1e1a1b7e61de94bb06703b58b69677a946f38e0448fd0c8a6340c64f9f1d" exitCode=0
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.040389 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" event={"ID":"76b0f1d1-484b-4959-963f-35a843f11fcc","Type":"ContainerDied","Data":"e6bb1e1a1b7e61de94bb06703b58b69677a946f38e0448fd0c8a6340c64f9f1d"}
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.153843 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rdt7d"
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.591129 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.596160 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0947813-175a-4246-acdb-53b09311ab93" path="/var/lib/kubelet/pods/e0947813-175a-4246-acdb-53b09311ab93/volumes"
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.659696 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-scripts\") pod \"3f329092-4d3a-424f-9f41-6b25ad9fe381\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") "
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.659811 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"3f329092-4d3a-424f-9f41-6b25ad9fe381\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") "
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.659859 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f329092-4d3a-424f-9f41-6b25ad9fe381-httpd-run\") pod \"3f329092-4d3a-424f-9f41-6b25ad9fe381\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") "
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.659914 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f329092-4d3a-424f-9f41-6b25ad9fe381-logs\") pod \"3f329092-4d3a-424f-9f41-6b25ad9fe381\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") "
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.659992 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-config-data\") pod \"3f329092-4d3a-424f-9f41-6b25ad9fe381\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") "
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.660054 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-internal-tls-certs\") pod \"3f329092-4d3a-424f-9f41-6b25ad9fe381\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") "
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.660088 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-combined-ca-bundle\") pod \"3f329092-4d3a-424f-9f41-6b25ad9fe381\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") "
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.660108 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64w8d\" (UniqueName: \"kubernetes.io/projected/3f329092-4d3a-424f-9f41-6b25ad9fe381-kube-api-access-64w8d\") pod \"3f329092-4d3a-424f-9f41-6b25ad9fe381\" (UID: \"3f329092-4d3a-424f-9f41-6b25ad9fe381\") "
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.661739 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f329092-4d3a-424f-9f41-6b25ad9fe381-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3f329092-4d3a-424f-9f41-6b25ad9fe381" (UID: "3f329092-4d3a-424f-9f41-6b25ad9fe381"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.661824 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f329092-4d3a-424f-9f41-6b25ad9fe381-logs" (OuterVolumeSpecName: "logs") pod "3f329092-4d3a-424f-9f41-6b25ad9fe381" (UID: "3f329092-4d3a-424f-9f41-6b25ad9fe381"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.665740 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f329092-4d3a-424f-9f41-6b25ad9fe381-kube-api-access-64w8d" (OuterVolumeSpecName: "kube-api-access-64w8d") pod "3f329092-4d3a-424f-9f41-6b25ad9fe381" (UID: "3f329092-4d3a-424f-9f41-6b25ad9fe381"). InnerVolumeSpecName "kube-api-access-64w8d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.680249 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "3f329092-4d3a-424f-9f41-6b25ad9fe381" (UID: "3f329092-4d3a-424f-9f41-6b25ad9fe381"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.680273 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-scripts" (OuterVolumeSpecName: "scripts") pod "3f329092-4d3a-424f-9f41-6b25ad9fe381" (UID: "3f329092-4d3a-424f-9f41-6b25ad9fe381"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.696656 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f329092-4d3a-424f-9f41-6b25ad9fe381" (UID: "3f329092-4d3a-424f-9f41-6b25ad9fe381"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.703641 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-config-data" (OuterVolumeSpecName: "config-data") pod "3f329092-4d3a-424f-9f41-6b25ad9fe381" (UID: "3f329092-4d3a-424f-9f41-6b25ad9fe381"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.718068 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3f329092-4d3a-424f-9f41-6b25ad9fe381" (UID: "3f329092-4d3a-424f-9f41-6b25ad9fe381"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.762498 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.762528 4768 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f329092-4d3a-424f-9f41-6b25ad9fe381-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.762538 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f329092-4d3a-424f-9f41-6b25ad9fe381-logs\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.762547 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.762556 4768 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.762567 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.762575 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64w8d\" (UniqueName: \"kubernetes.io/projected/3f329092-4d3a-424f-9f41-6b25ad9fe381-kube-api-access-64w8d\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.762583 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f329092-4d3a-424f-9f41-6b25ad9fe381-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.781288 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Nov 24 17:08:07 crc kubenswrapper[4768]: I1124 17:08:07.864421 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.009440 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" podUID="76b0f1d1-484b-4959-963f-35a843f11fcc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.069386 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3f329092-4d3a-424f-9f41-6b25ad9fe381","Type":"ContainerDied","Data":"d5d1e57c005d856c6c3ced0c5ccfdc28c84864b8982c5c1420650d97f09be587"}
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.069447 4768 scope.go:117] "RemoveContainer" containerID="3340a8865e8c71575947d1a306273686ce9113959f8276fdaa4b6d9b532cf5ea"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.069463 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.103199 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.109220 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.123460 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 17:08:08 crc kubenswrapper[4768]: E1124 17:08:08.123806 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f329092-4d3a-424f-9f41-6b25ad9fe381" containerName="glance-log"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.123818 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f329092-4d3a-424f-9f41-6b25ad9fe381" containerName="glance-log"
Nov 24 17:08:08 crc kubenswrapper[4768]: E1124 17:08:08.123836 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f329092-4d3a-424f-9f41-6b25ad9fe381" containerName="glance-httpd"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.123842 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f329092-4d3a-424f-9f41-6b25ad9fe381" containerName="glance-httpd"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.124101 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f329092-4d3a-424f-9f41-6b25ad9fe381" containerName="glance-httpd"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.124132 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f329092-4d3a-424f-9f41-6b25ad9fe381" containerName="glance-log"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.125064 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.131984 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.132258 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.162825 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.171455 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.171541 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-logs\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.171560 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.171680 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x49p\" (UniqueName: \"kubernetes.io/projected/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-kube-api-access-9x49p\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.171744 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.171769 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.171868 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.171952 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.273624 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.273731 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-logs\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.273759 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.273808 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x49p\" (UniqueName: \"kubernetes.io/projected/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-kube-api-access-9x49p\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.273839 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.273862 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.273907 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.273945 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.274180 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-logs\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.274212 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.274315 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.278081 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.278199 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.278622 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.290925 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.295298 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x49p\" (UniqueName: \"kubernetes.io/projected/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-kube-api-access-9x49p\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.305517 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:08 crc kubenswrapper[4768]: I1124 17:08:08.448238 4768 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 17:08:09 crc kubenswrapper[4768]: I1124 17:08:09.593847 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f329092-4d3a-424f-9f41-6b25ad9fe381" path="/var/lib/kubelet/pods/3f329092-4d3a-424f-9f41-6b25ad9fe381/volumes" Nov 24 17:08:13 crc kubenswrapper[4768]: I1124 17:08:13.009366 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" podUID="76b0f1d1-484b-4959-963f-35a843f11fcc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Nov 24 17:08:13 crc kubenswrapper[4768]: E1124 17:08:13.763223 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b5ab834_a98f_4ace_a22f_cde15ebf7f4b.slice/crio-b2dd97c3ec05ee6fde7de0529d5becf3d88de483be8aaecf84ccad322a66c99c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b5ab834_a98f_4ace_a22f_cde15ebf7f4b.slice\": RecentStats: unable to find data in memory cache]" Nov 24 17:08:17 crc kubenswrapper[4768]: E1124 17:08:17.556567 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 24 17:08:17 crc kubenswrapper[4768]: E1124 17:08:17.557439 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vhp7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-6qc9l_openstack(56d73241-8027-4861-83ae-a766feceadd2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 17:08:17 crc kubenswrapper[4768]: E1124 17:08:17.558660 4768 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-6qc9l" podUID="56d73241-8027-4861-83ae-a766feceadd2" Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.664559 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.767693 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-config-data\") pod \"72d79e4c-84ab-49d2-a162-6e9d595145bb\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.768074 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72d79e4c-84ab-49d2-a162-6e9d595145bb-logs\") pod \"72d79e4c-84ab-49d2-a162-6e9d595145bb\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.768113 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-combined-ca-bundle\") pod \"72d79e4c-84ab-49d2-a162-6e9d595145bb\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.768170 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72d79e4c-84ab-49d2-a162-6e9d595145bb-httpd-run\") pod \"72d79e4c-84ab-49d2-a162-6e9d595145bb\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.768243 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"72d79e4c-84ab-49d2-a162-6e9d595145bb\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.768291 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-scripts\") pod \"72d79e4c-84ab-49d2-a162-6e9d595145bb\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.768369 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-public-tls-certs\") pod \"72d79e4c-84ab-49d2-a162-6e9d595145bb\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.768396 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlkxt\" (UniqueName: \"kubernetes.io/projected/72d79e4c-84ab-49d2-a162-6e9d595145bb-kube-api-access-jlkxt\") pod \"72d79e4c-84ab-49d2-a162-6e9d595145bb\" (UID: \"72d79e4c-84ab-49d2-a162-6e9d595145bb\") " Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.768527 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72d79e4c-84ab-49d2-a162-6e9d595145bb-logs" (OuterVolumeSpecName: "logs") pod "72d79e4c-84ab-49d2-a162-6e9d595145bb" (UID: "72d79e4c-84ab-49d2-a162-6e9d595145bb"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.768706 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72d79e4c-84ab-49d2-a162-6e9d595145bb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "72d79e4c-84ab-49d2-a162-6e9d595145bb" (UID: "72d79e4c-84ab-49d2-a162-6e9d595145bb"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.768991 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72d79e4c-84ab-49d2-a162-6e9d595145bb-logs\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.769013 4768 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72d79e4c-84ab-49d2-a162-6e9d595145bb-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.773831 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "72d79e4c-84ab-49d2-a162-6e9d595145bb" (UID: "72d79e4c-84ab-49d2-a162-6e9d595145bb"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.774204 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-scripts" (OuterVolumeSpecName: "scripts") pod "72d79e4c-84ab-49d2-a162-6e9d595145bb" (UID: "72d79e4c-84ab-49d2-a162-6e9d595145bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.774232 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72d79e4c-84ab-49d2-a162-6e9d595145bb-kube-api-access-jlkxt" (OuterVolumeSpecName: "kube-api-access-jlkxt") pod "72d79e4c-84ab-49d2-a162-6e9d595145bb" (UID: "72d79e4c-84ab-49d2-a162-6e9d595145bb"). InnerVolumeSpecName "kube-api-access-jlkxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.795651 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72d79e4c-84ab-49d2-a162-6e9d595145bb" (UID: "72d79e4c-84ab-49d2-a162-6e9d595145bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.823635 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-config-data" (OuterVolumeSpecName: "config-data") pod "72d79e4c-84ab-49d2-a162-6e9d595145bb" (UID: "72d79e4c-84ab-49d2-a162-6e9d595145bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.836652 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "72d79e4c-84ab-49d2-a162-6e9d595145bb" (UID: "72d79e4c-84ab-49d2-a162-6e9d595145bb"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.870639 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.870680 4768 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.870696 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlkxt\" (UniqueName: \"kubernetes.io/projected/72d79e4c-84ab-49d2-a162-6e9d595145bb-kube-api-access-jlkxt\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.870708 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.870720 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72d79e4c-84ab-49d2-a162-6e9d595145bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.870753 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.891935 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 24 17:08:17 crc kubenswrapper[4768]: I1124 17:08:17.972120 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.160199 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.162998 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"72d79e4c-84ab-49d2-a162-6e9d595145bb","Type":"ContainerDied","Data":"807e2d990a76608b4d1b009ae74985c1fa6ebc4cc59415783d66082f57504381"} Nov 24 17:08:18 crc kubenswrapper[4768]: E1124 17:08:18.164021 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-6qc9l" podUID="56d73241-8027-4861-83ae-a766feceadd2" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.207581 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.222280 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.236712 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 17:08:18 crc kubenswrapper[4768]: E1124 17:08:18.237182 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d79e4c-84ab-49d2-a162-6e9d595145bb" containerName="glance-log" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.237196 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d79e4c-84ab-49d2-a162-6e9d595145bb" containerName="glance-log" Nov 24 17:08:18 crc kubenswrapper[4768]: E1124 17:08:18.237207 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d79e4c-84ab-49d2-a162-6e9d595145bb" containerName="glance-httpd" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.237215 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d79e4c-84ab-49d2-a162-6e9d595145bb" containerName="glance-httpd" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.237436 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d79e4c-84ab-49d2-a162-6e9d595145bb" containerName="glance-httpd" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.237449 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d79e4c-84ab-49d2-a162-6e9d595145bb" containerName="glance-log" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.238522 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.242292 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.244944 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.245257 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.276602 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bdd840c-08db-42db-bd50-4f14b4dffbda-logs\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.276749 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp7dk\" (UniqueName: \"kubernetes.io/projected/3bdd840c-08db-42db-bd50-4f14b4dffbda-kube-api-access-dp7dk\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.276788 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.276855 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-config-data\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.276897 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3bdd840c-08db-42db-bd50-4f14b4dffbda-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.276914 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.276929 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-scripts\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.276955 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.378212 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-config-data\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.378272 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3bdd840c-08db-42db-bd50-4f14b4dffbda-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.378287 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.378304 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-scripts\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.378332 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.378399 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bdd840c-08db-42db-bd50-4f14b4dffbda-logs\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.378418 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp7dk\" (UniqueName: \"kubernetes.io/projected/3bdd840c-08db-42db-bd50-4f14b4dffbda-kube-api-access-dp7dk\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.378447 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.379224 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/3bdd840c-08db-42db-bd50-4f14b4dffbda-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.379335 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.379345 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bdd840c-08db-42db-bd50-4f14b4dffbda-logs\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.383938 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.384210 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-config-data\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.384417 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.386144 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-scripts\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.397178 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp7dk\" (UniqueName: \"kubernetes.io/projected/3bdd840c-08db-42db-bd50-4f14b4dffbda-kube-api-access-dp7dk\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.422361 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.569472 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.816733 4768 scope.go:117] "RemoveContainer" containerID="f5b1ad6f2eeeb44a033e1273cfebab172d3ddc95b77c834d2a8746a6be9312a1" Nov 24 17:08:18 crc kubenswrapper[4768]: E1124 17:08:18.823142 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 24 17:08:18 crc kubenswrapper[4768]: E1124 17:08:18.823297 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92xwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-w8vsq_openstack(632b579a-27e1-4431-a7ad-32631cf804b6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 17:08:18 crc kubenswrapper[4768]: E1124 17:08:18.824696 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-w8vsq" 
podUID="632b579a-27e1-4431-a7ad-32631cf804b6" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.945612 4768 scope.go:117] "RemoveContainer" containerID="034b8af04a1e0c566be704ab00b23ea81af183e7774ac45c861826bee16b9a3a" Nov 24 17:08:18 crc kubenswrapper[4768]: I1124 17:08:18.947265 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:18.989845 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-ovsdbserver-nb\") pod \"76b0f1d1-484b-4959-963f-35a843f11fcc\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:18.989932 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9bpv\" (UniqueName: \"kubernetes.io/projected/76b0f1d1-484b-4959-963f-35a843f11fcc-kube-api-access-p9bpv\") pod \"76b0f1d1-484b-4959-963f-35a843f11fcc\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:18.989971 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-dns-swift-storage-0\") pod \"76b0f1d1-484b-4959-963f-35a843f11fcc\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:18.990033 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-ovsdbserver-sb\") pod \"76b0f1d1-484b-4959-963f-35a843f11fcc\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:18.990053 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-dns-svc\") pod \"76b0f1d1-484b-4959-963f-35a843f11fcc\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:18.990105 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-config\") pod \"76b0f1d1-484b-4959-963f-35a843f11fcc\" (UID: \"76b0f1d1-484b-4959-963f-35a843f11fcc\") " Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.038120 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76b0f1d1-484b-4959-963f-35a843f11fcc-kube-api-access-p9bpv" (OuterVolumeSpecName: "kube-api-access-p9bpv") pod "76b0f1d1-484b-4959-963f-35a843f11fcc" (UID: "76b0f1d1-484b-4959-963f-35a843f11fcc"). InnerVolumeSpecName "kube-api-access-p9bpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.067801 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "76b0f1d1-484b-4959-963f-35a843f11fcc" (UID: "76b0f1d1-484b-4959-963f-35a843f11fcc"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.071145 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "76b0f1d1-484b-4959-963f-35a843f11fcc" (UID: "76b0f1d1-484b-4959-963f-35a843f11fcc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.076013 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "76b0f1d1-484b-4959-963f-35a843f11fcc" (UID: "76b0f1d1-484b-4959-963f-35a843f11fcc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.082985 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-config" (OuterVolumeSpecName: "config") pod "76b0f1d1-484b-4959-963f-35a843f11fcc" (UID: "76b0f1d1-484b-4959-963f-35a843f11fcc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.092266 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.092302 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.092311 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.092321 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9bpv\" (UniqueName: \"kubernetes.io/projected/76b0f1d1-484b-4959-963f-35a843f11fcc-kube-api-access-p9bpv\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.092364 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.098281 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "76b0f1d1-484b-4959-963f-35a843f11fcc" (UID: "76b0f1d1-484b-4959-963f-35a843f11fcc"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.099808 4768 scope.go:117] "RemoveContainer" containerID="494a1198bf84c3c79246dc817d20d890f516635df40a97fe4862160bd4e78880" Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.170732 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" event={"ID":"76b0f1d1-484b-4959-963f-35a843f11fcc","Type":"ContainerDied","Data":"df228b4cd09f256279bbe3ae0002a5a709d5c3bbe7920b809a9fd04f2e668e60"} Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.171101 4768 scope.go:117] "RemoveContainer" containerID="e6bb1e1a1b7e61de94bb06703b58b69677a946f38e0448fd0c8a6340c64f9f1d" Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.170966 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.191931 4768 scope.go:117] "RemoveContainer" containerID="45a3afd5afab84eb01771e39cfc50869f354d26f81367aad5e74e09aa642e1d7" Nov 24 17:08:19 crc kubenswrapper[4768]: E1124 17:08:19.192274 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-w8vsq" podUID="632b579a-27e1-4431-a7ad-32631cf804b6" Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.193563 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76b0f1d1-484b-4959-963f-35a843f11fcc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.216719 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-gnsqw"] Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.224561 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-gnsqw"] Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.410117 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rdt7d"] Nov 24 17:08:19 crc kubenswrapper[4768]: W1124 17:08:19.411832 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07b29f36_8738_4aff_b55f_9bf0ce77e344.slice/crio-7f87e4288dfec6f01c78c7f6a583033f65ba46aff314b16bdc5461cf352cbc1b WatchSource:0}: Error finding container 7f87e4288dfec6f01c78c7f6a583033f65ba46aff314b16bdc5461cf352cbc1b: Status 404 returned error can't find the container with id 7f87e4288dfec6f01c78c7f6a583033f65ba46aff314b16bdc5461cf352cbc1b Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.428076 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-jtdld"] Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.491417 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.600411 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72d79e4c-84ab-49d2-a162-6e9d595145bb" path="/var/lib/kubelet/pods/72d79e4c-84ab-49d2-a162-6e9d595145bb/volumes" Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.601684 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76b0f1d1-484b-4959-963f-35a843f11fcc" 
path="/var/lib/kubelet/pods/76b0f1d1-484b-4959-963f-35a843f11fcc/volumes" Nov 24 17:08:19 crc kubenswrapper[4768]: I1124 17:08:19.654960 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 17:08:19 crc kubenswrapper[4768]: W1124 17:08:19.659490 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21ec6fe8_8b5a_4ebd_89a9_459fd8f109d7.slice/crio-65f0f8155e77589ab96edf50b44dd24bb2a5e1390d2dfdc7fc943def4e64f7c7 WatchSource:0}: Error finding container 65f0f8155e77589ab96edf50b44dd24bb2a5e1390d2dfdc7fc943def4e64f7c7: Status 404 returned error can't find the container with id 65f0f8155e77589ab96edf50b44dd24bb2a5e1390d2dfdc7fc943def4e64f7c7 Nov 24 17:08:20 crc kubenswrapper[4768]: I1124 17:08:20.195103 4768 generic.go:334] "Generic (PLEG): container finished" podID="14b5f872-0b2d-4937-bc26-dac18713087f" containerID="423e81fee4762b3ef15b48c3fb59a65136bf15c6b1423b40632346b8636a4462" exitCode=0 Nov 24 17:08:20 crc kubenswrapper[4768]: I1124 17:08:20.195423 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cntsw" event={"ID":"14b5f872-0b2d-4937-bc26-dac18713087f","Type":"ContainerDied","Data":"423e81fee4762b3ef15b48c3fb59a65136bf15c6b1423b40632346b8636a4462"} Nov 24 17:08:20 crc kubenswrapper[4768]: I1124 17:08:20.197276 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1869f53-e1c3-4194-a66f-8d16238e0fe3","Type":"ContainerStarted","Data":"8ec5009a1aa05e7dcf1ea752dd6c371af21bd69d151bd859e9fe1786fd0938b0"} Nov 24 17:08:20 crc kubenswrapper[4768]: I1124 17:08:20.198682 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-jtdld" event={"ID":"443cde2a-91e0-404e-a067-00558608d888","Type":"ContainerStarted","Data":"32c351f714bf154245eaf0fc9e4787762420b70fb9ae664c76939e75237d1d40"} Nov 24 17:08:20 crc kubenswrapper[4768]: I1124 17:08:20.200485 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rdt7d" event={"ID":"07b29f36-8738-4aff-b55f-9bf0ce77e344","Type":"ContainerStarted","Data":"8d483905d884fd224c5eeba6e6fc981bf2d20bc585c08c6ea52636b3a277423b"} Nov 24 17:08:20 crc kubenswrapper[4768]: I1124 17:08:20.200511 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rdt7d" event={"ID":"07b29f36-8738-4aff-b55f-9bf0ce77e344","Type":"ContainerStarted","Data":"7f87e4288dfec6f01c78c7f6a583033f65ba46aff314b16bdc5461cf352cbc1b"} Nov 24 17:08:20 crc kubenswrapper[4768]: I1124 17:08:20.201561 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3bdd840c-08db-42db-bd50-4f14b4dffbda","Type":"ContainerStarted","Data":"79e810ba5ac54f56725cdc2354c425fc340f5accbb407f85b1156e27a4b166df"} Nov 24 17:08:20 crc kubenswrapper[4768]: I1124 17:08:20.201586 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3bdd840c-08db-42db-bd50-4f14b4dffbda","Type":"ContainerStarted","Data":"499656a337329f0593fd3450efc69319e19cf6948ac74a665827ed786e96abf0"} Nov 24 17:08:20 crc kubenswrapper[4768]: I1124 17:08:20.203206 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7","Type":"ContainerStarted","Data":"2ad5ec824ee5334baae153e8bf0deda8a0353c87f9ab8af6aa26ef61a6df8bb6"} Nov 24 17:08:20 crc 
kubenswrapper[4768]: I1124 17:08:20.203249 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7","Type":"ContainerStarted","Data":"65f0f8155e77589ab96edf50b44dd24bb2a5e1390d2dfdc7fc943def4e64f7c7"} Nov 24 17:08:20 crc kubenswrapper[4768]: I1124 17:08:20.204488 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v5vq6" event={"ID":"a25ecf7c-a4b8-40e9-97b1-2b52c3094474","Type":"ContainerStarted","Data":"7b9aca73978f92a37a42dea1c3ada1057ff5ba25f851b9858677fcf99e249ffd"} Nov 24 17:08:20 crc kubenswrapper[4768]: I1124 17:08:20.229938 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-v5vq6" podStartSLOduration=3.313543796 podStartE2EDuration="25.229922611s" podCreationTimestamp="2025-11-24 17:07:55 +0000 UTC" firstStartedPulling="2025-11-24 17:07:56.902296435 +0000 UTC m=+958.149265093" lastFinishedPulling="2025-11-24 17:08:18.81867525 +0000 UTC m=+980.065643908" observedRunningTime="2025-11-24 17:08:20.225679241 +0000 UTC m=+981.472647909" watchObservedRunningTime="2025-11-24 17:08:20.229922611 +0000 UTC m=+981.476891269" Nov 24 17:08:21 crc kubenswrapper[4768]: I1124 17:08:21.218708 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3bdd840c-08db-42db-bd50-4f14b4dffbda","Type":"ContainerStarted","Data":"72d0aec70e92f2784e1c5d3b95cbd1f80306b398367150b8a78b6fcbe8a857be"} Nov 24 17:08:21 crc kubenswrapper[4768]: I1124 17:08:21.224412 4768 generic.go:334] "Generic (PLEG): container finished" podID="a25ecf7c-a4b8-40e9-97b1-2b52c3094474" containerID="7b9aca73978f92a37a42dea1c3ada1057ff5ba25f851b9858677fcf99e249ffd" exitCode=0 Nov 24 17:08:21 crc kubenswrapper[4768]: I1124 17:08:21.224558 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v5vq6" event={"ID":"a25ecf7c-a4b8-40e9-97b1-2b52c3094474","Type":"ContainerDied","Data":"7b9aca73978f92a37a42dea1c3ada1057ff5ba25f851b9858677fcf99e249ffd"} Nov 24 17:08:21 crc kubenswrapper[4768]: I1124 17:08:21.234452 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7","Type":"ContainerStarted","Data":"781b648f95fba0a21385dde8435f3d2e3be4edb0dfb2d49ae1b282b12b20427b"} Nov 24 17:08:21 crc kubenswrapper[4768]: I1124 17:08:21.237248 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-rdt7d" podStartSLOduration=15.237234754 podStartE2EDuration="15.237234754s" podCreationTimestamp="2025-11-24 17:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:08:20.243569597 +0000 UTC m=+981.490538255" watchObservedRunningTime="2025-11-24 17:08:21.237234754 +0000 UTC m=+982.484203412" Nov 24 17:08:21 crc kubenswrapper[4768]: I1124 17:08:21.237466 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.237461731 podStartE2EDuration="3.237461731s" podCreationTimestamp="2025-11-24 17:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:08:21.237296536 +0000 UTC m=+982.484265194" watchObservedRunningTime="2025-11-24 17:08:21.237461731 +0000 UTC m=+982.484430389" 
Nov 24 17:08:21 crc kubenswrapper[4768]: I1124 17:08:21.251155 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1869f53-e1c3-4194-a66f-8d16238e0fe3","Type":"ContainerStarted","Data":"e3e7ae9be55c2966455d44a4b641633d27c675aeb43d53acfa33c92fdf151708"} Nov 24 17:08:21 crc kubenswrapper[4768]: I1124 17:08:21.258088 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=13.258072163 podStartE2EDuration="13.258072163s" podCreationTimestamp="2025-11-24 17:08:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:08:21.252813314 +0000 UTC m=+982.499781972" watchObservedRunningTime="2025-11-24 17:08:21.258072163 +0000 UTC m=+982.505040821" Nov 24 17:08:21 crc kubenswrapper[4768]: I1124 17:08:21.617865 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cntsw" Nov 24 17:08:21 crc kubenswrapper[4768]: I1124 17:08:21.738963 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14b5f872-0b2d-4937-bc26-dac18713087f-combined-ca-bundle\") pod \"14b5f872-0b2d-4937-bc26-dac18713087f\" (UID: \"14b5f872-0b2d-4937-bc26-dac18713087f\") " Nov 24 17:08:21 crc kubenswrapper[4768]: I1124 17:08:21.739015 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jhvh\" (UniqueName: \"kubernetes.io/projected/14b5f872-0b2d-4937-bc26-dac18713087f-kube-api-access-5jhvh\") pod \"14b5f872-0b2d-4937-bc26-dac18713087f\" (UID: \"14b5f872-0b2d-4937-bc26-dac18713087f\") " Nov 24 17:08:21 crc kubenswrapper[4768]: I1124 17:08:21.739094 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/14b5f872-0b2d-4937-bc26-dac18713087f-config\") pod \"14b5f872-0b2d-4937-bc26-dac18713087f\" (UID: \"14b5f872-0b2d-4937-bc26-dac18713087f\") " Nov 24 17:08:21 crc kubenswrapper[4768]: I1124 17:08:21.745195 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14b5f872-0b2d-4937-bc26-dac18713087f-kube-api-access-5jhvh" (OuterVolumeSpecName: "kube-api-access-5jhvh") pod "14b5f872-0b2d-4937-bc26-dac18713087f" (UID: "14b5f872-0b2d-4937-bc26-dac18713087f"). InnerVolumeSpecName "kube-api-access-5jhvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:08:21 crc kubenswrapper[4768]: I1124 17:08:21.763169 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14b5f872-0b2d-4937-bc26-dac18713087f-config" (OuterVolumeSpecName: "config") pod "14b5f872-0b2d-4937-bc26-dac18713087f" (UID: "14b5f872-0b2d-4937-bc26-dac18713087f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:21 crc kubenswrapper[4768]: I1124 17:08:21.778880 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14b5f872-0b2d-4937-bc26-dac18713087f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14b5f872-0b2d-4937-bc26-dac18713087f" (UID: "14b5f872-0b2d-4937-bc26-dac18713087f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:21 crc kubenswrapper[4768]: I1124 17:08:21.840752 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14b5f872-0b2d-4937-bc26-dac18713087f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:21 crc kubenswrapper[4768]: I1124 17:08:21.840781 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jhvh\" (UniqueName: \"kubernetes.io/projected/14b5f872-0b2d-4937-bc26-dac18713087f-kube-api-access-5jhvh\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:21 crc kubenswrapper[4768]: I1124 17:08:21.840809 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/14b5f872-0b2d-4937-bc26-dac18713087f-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.260728 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cntsw" event={"ID":"14b5f872-0b2d-4937-bc26-dac18713087f","Type":"ContainerDied","Data":"5dd785857142942159a87f12228a2781917eaf1ef732e98816e0af115a7ef9a8"} Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.261077 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dd785857142942159a87f12228a2781917eaf1ef732e98816e0af115a7ef9a8" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.260807 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cntsw" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.408275 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-79ljq"] Nov 24 17:08:22 crc kubenswrapper[4768]: E1124 17:08:22.408602 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14b5f872-0b2d-4937-bc26-dac18713087f" containerName="neutron-db-sync" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.408619 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="14b5f872-0b2d-4937-bc26-dac18713087f" containerName="neutron-db-sync" Nov 24 17:08:22 crc kubenswrapper[4768]: E1124 17:08:22.408641 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76b0f1d1-484b-4959-963f-35a843f11fcc" containerName="dnsmasq-dns" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.408648 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="76b0f1d1-484b-4959-963f-35a843f11fcc" containerName="dnsmasq-dns" Nov 24 17:08:22 crc kubenswrapper[4768]: E1124 17:08:22.408671 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76b0f1d1-484b-4959-963f-35a843f11fcc" containerName="init" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.408678 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="76b0f1d1-484b-4959-963f-35a843f11fcc" containerName="init" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.408818 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="14b5f872-0b2d-4937-bc26-dac18713087f" containerName="neutron-db-sync" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.408842 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="76b0f1d1-484b-4959-963f-35a843f11fcc" containerName="dnsmasq-dns" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.412503 4768 util.go:30] "No sandbox for pod can be found. 
Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.430993 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-79ljq"]
Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.454310 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-dns-svc\") pod \"dnsmasq-dns-55f844cf75-79ljq\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") " pod="openstack/dnsmasq-dns-55f844cf75-79ljq"
Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.454399 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-79ljq\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") " pod="openstack/dnsmasq-dns-55f844cf75-79ljq"
Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.454459 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-79ljq\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") " pod="openstack/dnsmasq-dns-55f844cf75-79ljq"
Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.454477 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-79ljq\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") " pod="openstack/dnsmasq-dns-55f844cf75-79ljq"
Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.454505 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-config\") pod \"dnsmasq-dns-55f844cf75-79ljq\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") " pod="openstack/dnsmasq-dns-55f844cf75-79ljq"
Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.454533 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln8f5\" (UniqueName: \"kubernetes.io/projected/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-kube-api-access-ln8f5\") pod \"dnsmasq-dns-55f844cf75-79ljq\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") " pod="openstack/dnsmasq-dns-55f844cf75-79ljq"
Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.543450 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-58cbfb7868-t7r6m"]
Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.544716 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58cbfb7868-t7r6m"
Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.546841 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.547083 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.547108 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.547437 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-z4qzl"
Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.555752 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-config\") pod \"dnsmasq-dns-55f844cf75-79ljq\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") " pod="openstack/dnsmasq-dns-55f844cf75-79ljq"
Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.555810 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln8f5\" (UniqueName: \"kubernetes.io/projected/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-kube-api-access-ln8f5\") pod \"dnsmasq-dns-55f844cf75-79ljq\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") " pod="openstack/dnsmasq-dns-55f844cf75-79ljq"
Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.555858 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-dns-svc\") pod \"dnsmasq-dns-55f844cf75-79ljq\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") " pod="openstack/dnsmasq-dns-55f844cf75-79ljq"
Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.555909 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-79ljq\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") " pod="openstack/dnsmasq-dns-55f844cf75-79ljq"
Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.555978 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-79ljq\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") " pod="openstack/dnsmasq-dns-55f844cf75-79ljq"
Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.556000 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-79ljq\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") " pod="openstack/dnsmasq-dns-55f844cf75-79ljq"
Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.556991 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-config\") pod \"dnsmasq-dns-55f844cf75-79ljq\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") " pod="openstack/dnsmasq-dns-55f844cf75-79ljq"
Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.557190 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-79ljq\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") " pod="openstack/dnsmasq-dns-55f844cf75-79ljq"
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-79ljq\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") " pod="openstack/dnsmasq-dns-55f844cf75-79ljq" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.557211 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-dns-svc\") pod \"dnsmasq-dns-55f844cf75-79ljq\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") " pod="openstack/dnsmasq-dns-55f844cf75-79ljq" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.557337 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-79ljq\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") " pod="openstack/dnsmasq-dns-55f844cf75-79ljq" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.559634 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-79ljq\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") " pod="openstack/dnsmasq-dns-55f844cf75-79ljq" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.563156 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58cbfb7868-t7r6m"] Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.574342 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln8f5\" (UniqueName: \"kubernetes.io/projected/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-kube-api-access-ln8f5\") pod \"dnsmasq-dns-55f844cf75-79ljq\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") " pod="openstack/dnsmasq-dns-55f844cf75-79ljq" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.658110 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-ovndb-tls-certs\") pod \"neutron-58cbfb7868-t7r6m\" (UID: \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\") " pod="openstack/neutron-58cbfb7868-t7r6m" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.658182 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-config\") pod \"neutron-58cbfb7868-t7r6m\" (UID: \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\") " pod="openstack/neutron-58cbfb7868-t7r6m" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.658276 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-httpd-config\") pod \"neutron-58cbfb7868-t7r6m\" (UID: \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\") " pod="openstack/neutron-58cbfb7868-t7r6m" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.658413 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tpjw\" (UniqueName: \"kubernetes.io/projected/9038e5e4-2985-4de6-b6d5-e16d170d38d8-kube-api-access-2tpjw\") pod \"neutron-58cbfb7868-t7r6m\" (UID: \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\") " pod="openstack/neutron-58cbfb7868-t7r6m" Nov 24 17:08:22 crc 
kubenswrapper[4768]: I1124 17:08:22.659490 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-combined-ca-bundle\") pod \"neutron-58cbfb7868-t7r6m\" (UID: \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\") " pod="openstack/neutron-58cbfb7868-t7r6m" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.755412 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-79ljq" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.762859 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-ovndb-tls-certs\") pod \"neutron-58cbfb7868-t7r6m\" (UID: \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\") " pod="openstack/neutron-58cbfb7868-t7r6m" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.762904 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-config\") pod \"neutron-58cbfb7868-t7r6m\" (UID: \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\") " pod="openstack/neutron-58cbfb7868-t7r6m" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.762963 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-httpd-config\") pod \"neutron-58cbfb7868-t7r6m\" (UID: \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\") " pod="openstack/neutron-58cbfb7868-t7r6m" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.763015 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tpjw\" (UniqueName: \"kubernetes.io/projected/9038e5e4-2985-4de6-b6d5-e16d170d38d8-kube-api-access-2tpjw\") pod \"neutron-58cbfb7868-t7r6m\" (UID: \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\") " pod="openstack/neutron-58cbfb7868-t7r6m" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.763040 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-combined-ca-bundle\") pod \"neutron-58cbfb7868-t7r6m\" (UID: \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\") " pod="openstack/neutron-58cbfb7868-t7r6m" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.766411 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-ovndb-tls-certs\") pod \"neutron-58cbfb7868-t7r6m\" (UID: \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\") " pod="openstack/neutron-58cbfb7868-t7r6m" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.766769 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-httpd-config\") pod \"neutron-58cbfb7868-t7r6m\" (UID: \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\") " pod="openstack/neutron-58cbfb7868-t7r6m" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.766891 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-config\") pod \"neutron-58cbfb7868-t7r6m\" (UID: \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\") " 
pod="openstack/neutron-58cbfb7868-t7r6m" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.780519 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tpjw\" (UniqueName: \"kubernetes.io/projected/9038e5e4-2985-4de6-b6d5-e16d170d38d8-kube-api-access-2tpjw\") pod \"neutron-58cbfb7868-t7r6m\" (UID: \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\") " pod="openstack/neutron-58cbfb7868-t7r6m" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.781439 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-combined-ca-bundle\") pod \"neutron-58cbfb7868-t7r6m\" (UID: \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\") " pod="openstack/neutron-58cbfb7868-t7r6m" Nov 24 17:08:22 crc kubenswrapper[4768]: I1124 17:08:22.933053 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58cbfb7868-t7r6m" Nov 24 17:08:23 crc kubenswrapper[4768]: I1124 17:08:23.008890 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-gnsqw" podUID="76b0f1d1-484b-4959-963f-35a843f11fcc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: i/o timeout" Nov 24 17:08:23 crc kubenswrapper[4768]: I1124 17:08:23.279894 4768 generic.go:334] "Generic (PLEG): container finished" podID="07b29f36-8738-4aff-b55f-9bf0ce77e344" containerID="8d483905d884fd224c5eeba6e6fc981bf2d20bc585c08c6ea52636b3a277423b" exitCode=0 Nov 24 17:08:23 crc kubenswrapper[4768]: I1124 17:08:23.279939 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rdt7d" event={"ID":"07b29f36-8738-4aff-b55f-9bf0ce77e344","Type":"ContainerDied","Data":"8d483905d884fd224c5eeba6e6fc981bf2d20bc585c08c6ea52636b3a277423b"} Nov 24 17:08:23 crc kubenswrapper[4768]: E1124 17:08:23.974931 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b5ab834_a98f_4ace_a22f_cde15ebf7f4b.slice/crio-b2dd97c3ec05ee6fde7de0529d5becf3d88de483be8aaecf84ccad322a66c99c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b5ab834_a98f_4ace_a22f_cde15ebf7f4b.slice\": RecentStats: unable to find data in memory cache]" Nov 24 17:08:24 crc kubenswrapper[4768]: I1124 17:08:24.791936 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c9b47fdf7-ztl8b"] Nov 24 17:08:24 crc kubenswrapper[4768]: I1124 17:08:24.794092 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:24 crc kubenswrapper[4768]: I1124 17:08:24.797811 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 24 17:08:24 crc kubenswrapper[4768]: I1124 17:08:24.806600 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 24 17:08:24 crc kubenswrapper[4768]: I1124 17:08:24.807653 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c9b47fdf7-ztl8b"] Nov 24 17:08:24 crc kubenswrapper[4768]: I1124 17:08:24.903417 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz2m7\" (UniqueName: \"kubernetes.io/projected/4ffadf60-9eff-4bf9-b0bd-9480cbd0d917-kube-api-access-tz2m7\") pod \"neutron-c9b47fdf7-ztl8b\" (UID: \"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917\") " pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:24 crc kubenswrapper[4768]: I1124 17:08:24.903487 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ffadf60-9eff-4bf9-b0bd-9480cbd0d917-internal-tls-certs\") pod \"neutron-c9b47fdf7-ztl8b\" (UID: \"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917\") " pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:24 crc kubenswrapper[4768]: I1124 17:08:24.903699 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ffadf60-9eff-4bf9-b0bd-9480cbd0d917-ovndb-tls-certs\") pod \"neutron-c9b47fdf7-ztl8b\" (UID: \"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917\") " pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:24 crc kubenswrapper[4768]: I1124 17:08:24.903832 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ffadf60-9eff-4bf9-b0bd-9480cbd0d917-combined-ca-bundle\") pod \"neutron-c9b47fdf7-ztl8b\" (UID: \"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917\") " pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:24 crc kubenswrapper[4768]: I1124 17:08:24.903938 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ffadf60-9eff-4bf9-b0bd-9480cbd0d917-config\") pod \"neutron-c9b47fdf7-ztl8b\" (UID: \"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917\") " pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:24 crc kubenswrapper[4768]: I1124 17:08:24.904072 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ffadf60-9eff-4bf9-b0bd-9480cbd0d917-public-tls-certs\") pod \"neutron-c9b47fdf7-ztl8b\" (UID: \"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917\") " pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:24 crc kubenswrapper[4768]: I1124 17:08:24.904157 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4ffadf60-9eff-4bf9-b0bd-9480cbd0d917-httpd-config\") pod \"neutron-c9b47fdf7-ztl8b\" (UID: \"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917\") " pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:25 crc kubenswrapper[4768]: I1124 17:08:25.006063 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4ffadf60-9eff-4bf9-b0bd-9480cbd0d917-combined-ca-bundle\") pod \"neutron-c9b47fdf7-ztl8b\" (UID: \"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917\") " pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:25 crc kubenswrapper[4768]: I1124 17:08:25.006123 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ffadf60-9eff-4bf9-b0bd-9480cbd0d917-config\") pod \"neutron-c9b47fdf7-ztl8b\" (UID: \"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917\") " pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:25 crc kubenswrapper[4768]: I1124 17:08:25.006164 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ffadf60-9eff-4bf9-b0bd-9480cbd0d917-public-tls-certs\") pod \"neutron-c9b47fdf7-ztl8b\" (UID: \"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917\") " pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:25 crc kubenswrapper[4768]: I1124 17:08:25.006192 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4ffadf60-9eff-4bf9-b0bd-9480cbd0d917-httpd-config\") pod \"neutron-c9b47fdf7-ztl8b\" (UID: \"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917\") " pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:25 crc kubenswrapper[4768]: I1124 17:08:25.006232 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz2m7\" (UniqueName: \"kubernetes.io/projected/4ffadf60-9eff-4bf9-b0bd-9480cbd0d917-kube-api-access-tz2m7\") pod \"neutron-c9b47fdf7-ztl8b\" (UID: \"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917\") " pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:25 crc kubenswrapper[4768]: I1124 17:08:25.006258 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ffadf60-9eff-4bf9-b0bd-9480cbd0d917-internal-tls-certs\") pod \"neutron-c9b47fdf7-ztl8b\" (UID: \"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917\") " pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:25 crc kubenswrapper[4768]: I1124 17:08:25.006309 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ffadf60-9eff-4bf9-b0bd-9480cbd0d917-ovndb-tls-certs\") pod \"neutron-c9b47fdf7-ztl8b\" (UID: \"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917\") " pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:25 crc kubenswrapper[4768]: I1124 17:08:25.012300 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ffadf60-9eff-4bf9-b0bd-9480cbd0d917-ovndb-tls-certs\") pod \"neutron-c9b47fdf7-ztl8b\" (UID: \"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917\") " pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:25 crc kubenswrapper[4768]: I1124 17:08:25.012584 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ffadf60-9eff-4bf9-b0bd-9480cbd0d917-combined-ca-bundle\") pod \"neutron-c9b47fdf7-ztl8b\" (UID: \"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917\") " pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:25 crc kubenswrapper[4768]: I1124 17:08:25.013450 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ffadf60-9eff-4bf9-b0bd-9480cbd0d917-public-tls-certs\") pod \"neutron-c9b47fdf7-ztl8b\" (UID: \"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917\") " 
pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:25 crc kubenswrapper[4768]: I1124 17:08:25.013715 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ffadf60-9eff-4bf9-b0bd-9480cbd0d917-config\") pod \"neutron-c9b47fdf7-ztl8b\" (UID: \"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917\") " pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:25 crc kubenswrapper[4768]: I1124 17:08:25.014912 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ffadf60-9eff-4bf9-b0bd-9480cbd0d917-internal-tls-certs\") pod \"neutron-c9b47fdf7-ztl8b\" (UID: \"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917\") " pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:25 crc kubenswrapper[4768]: I1124 17:08:25.025457 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4ffadf60-9eff-4bf9-b0bd-9480cbd0d917-httpd-config\") pod \"neutron-c9b47fdf7-ztl8b\" (UID: \"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917\") " pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:25 crc kubenswrapper[4768]: I1124 17:08:25.034054 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz2m7\" (UniqueName: \"kubernetes.io/projected/4ffadf60-9eff-4bf9-b0bd-9480cbd0d917-kube-api-access-tz2m7\") pod \"neutron-c9b47fdf7-ztl8b\" (UID: \"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917\") " pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:25 crc kubenswrapper[4768]: I1124 17:08:25.151820 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:28 crc kubenswrapper[4768]: I1124 17:08:28.449257 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 17:08:28 crc kubenswrapper[4768]: I1124 17:08:28.450097 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 17:08:28 crc kubenswrapper[4768]: I1124 17:08:28.481758 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 17:08:28 crc kubenswrapper[4768]: I1124 17:08:28.513723 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 17:08:28 crc kubenswrapper[4768]: I1124 17:08:28.570223 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 17:08:28 crc kubenswrapper[4768]: I1124 17:08:28.570267 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 17:08:28 crc kubenswrapper[4768]: I1124 17:08:28.606303 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 17:08:28 crc kubenswrapper[4768]: I1124 17:08:28.629056 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 17:08:29 crc kubenswrapper[4768]: I1124 17:08:29.331803 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 17:08:29 crc kubenswrapper[4768]: I1124 17:08:29.331848 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 17:08:29 crc 
Nov 24 17:08:29 crc kubenswrapper[4768]: I1124 17:08:29.331875 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.138030 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.140764 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.241762 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.251056 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.368397 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rdt7d" event={"ID":"07b29f36-8738-4aff-b55f-9bf0ce77e344","Type":"ContainerDied","Data":"7f87e4288dfec6f01c78c7f6a583033f65ba46aff314b16bdc5461cf352cbc1b"}
Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.368440 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f87e4288dfec6f01c78c7f6a583033f65ba46aff314b16bdc5461cf352cbc1b"
Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.406811 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-v5vq6" event={"ID":"a25ecf7c-a4b8-40e9-97b1-2b52c3094474","Type":"ContainerDied","Data":"d30828a170cb5cc651e2a844f16a1420884cda98f50854f7d4b62074d505731a"}
Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.407153 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d30828a170cb5cc651e2a844f16a1420884cda98f50854f7d4b62074d505731a"
Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.523908 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rdt7d"
Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.548024 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-v5vq6"
Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.644133 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfwgm\" (UniqueName: \"kubernetes.io/projected/07b29f36-8738-4aff-b55f-9bf0ce77e344-kube-api-access-rfwgm\") pod \"07b29f36-8738-4aff-b55f-9bf0ce77e344\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") "
Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.644527 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-fernet-keys\") pod \"07b29f36-8738-4aff-b55f-9bf0ce77e344\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") "
Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.644561 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-scripts\") pod \"07b29f36-8738-4aff-b55f-9bf0ce77e344\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") "
Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.644581 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-combined-ca-bundle\") pod \"07b29f36-8738-4aff-b55f-9bf0ce77e344\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") "
Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.644822 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp5t4\" (UniqueName: \"kubernetes.io/projected/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-kube-api-access-kp5t4\") pod \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\" (UID: \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\") "
Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.644878 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-logs\") pod \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\" (UID: \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\") "
Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.644893 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-scripts\") pod \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\" (UID: \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\") "
Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.644921 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-config-data\") pod \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\" (UID: \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\") "
Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.644939 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-config-data\") pod \"07b29f36-8738-4aff-b55f-9bf0ce77e344\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") "
Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.644978 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-combined-ca-bundle\") pod \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\" (UID: \"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\") "
\"a25ecf7c-a4b8-40e9-97b1-2b52c3094474\") " Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.645005 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-credential-keys\") pod \"07b29f36-8738-4aff-b55f-9bf0ce77e344\" (UID: \"07b29f36-8738-4aff-b55f-9bf0ce77e344\") " Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.645707 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-logs" (OuterVolumeSpecName: "logs") pod "a25ecf7c-a4b8-40e9-97b1-2b52c3094474" (UID: "a25ecf7c-a4b8-40e9-97b1-2b52c3094474"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.656535 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-kube-api-access-kp5t4" (OuterVolumeSpecName: "kube-api-access-kp5t4") pod "a25ecf7c-a4b8-40e9-97b1-2b52c3094474" (UID: "a25ecf7c-a4b8-40e9-97b1-2b52c3094474"). InnerVolumeSpecName "kube-api-access-kp5t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.656631 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-scripts" (OuterVolumeSpecName: "scripts") pod "a25ecf7c-a4b8-40e9-97b1-2b52c3094474" (UID: "a25ecf7c-a4b8-40e9-97b1-2b52c3094474"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.659006 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-scripts" (OuterVolumeSpecName: "scripts") pod "07b29f36-8738-4aff-b55f-9bf0ce77e344" (UID: "07b29f36-8738-4aff-b55f-9bf0ce77e344"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.660517 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "07b29f36-8738-4aff-b55f-9bf0ce77e344" (UID: "07b29f36-8738-4aff-b55f-9bf0ce77e344"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.664862 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "07b29f36-8738-4aff-b55f-9bf0ce77e344" (UID: "07b29f36-8738-4aff-b55f-9bf0ce77e344"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.664860 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b29f36-8738-4aff-b55f-9bf0ce77e344-kube-api-access-rfwgm" (OuterVolumeSpecName: "kube-api-access-rfwgm") pod "07b29f36-8738-4aff-b55f-9bf0ce77e344" (UID: "07b29f36-8738-4aff-b55f-9bf0ce77e344"). InnerVolumeSpecName "kube-api-access-rfwgm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.687166 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07b29f36-8738-4aff-b55f-9bf0ce77e344" (UID: "07b29f36-8738-4aff-b55f-9bf0ce77e344"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.700949 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-config-data" (OuterVolumeSpecName: "config-data") pod "a25ecf7c-a4b8-40e9-97b1-2b52c3094474" (UID: "a25ecf7c-a4b8-40e9-97b1-2b52c3094474"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.701389 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-config-data" (OuterVolumeSpecName: "config-data") pod "07b29f36-8738-4aff-b55f-9bf0ce77e344" (UID: "07b29f36-8738-4aff-b55f-9bf0ce77e344"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.718880 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a25ecf7c-a4b8-40e9-97b1-2b52c3094474" (UID: "a25ecf7c-a4b8-40e9-97b1-2b52c3094474"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.746812 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfwgm\" (UniqueName: \"kubernetes.io/projected/07b29f36-8738-4aff-b55f-9bf0ce77e344-kube-api-access-rfwgm\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.746842 4768 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.746852 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.746863 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.746872 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp5t4\" (UniqueName: \"kubernetes.io/projected/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-kube-api-access-kp5t4\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.746880 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-logs\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.746889 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.746897 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.746905 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.746915 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25ecf7c-a4b8-40e9-97b1-2b52c3094474-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.746924 4768 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/07b29f36-8738-4aff-b55f-9bf0ce77e344-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:31 crc kubenswrapper[4768]: I1124 17:08:31.973500 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58cbfb7868-t7r6m"] Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.121106 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-79ljq"] Nov 24 17:08:32 crc kubenswrapper[4768]: W1124 17:08:32.138033 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc2ddf52_e603_44e4_a5ef_aa85afdc7c26.slice/crio-374c53fd5808380014ea96c88a695b3f44a05b46e3c69df43614f24b37b9e50d WatchSource:0}: Error finding container 374c53fd5808380014ea96c88a695b3f44a05b46e3c69df43614f24b37b9e50d: Status 404 returned error can't find the container with id 374c53fd5808380014ea96c88a695b3f44a05b46e3c69df43614f24b37b9e50d Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.154622 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c9b47fdf7-ztl8b"] Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.418771 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58cbfb7868-t7r6m" event={"ID":"9038e5e4-2985-4de6-b6d5-e16d170d38d8","Type":"ContainerStarted","Data":"bbb3d5dc503f42663a282cf67f51a144f5843fcddc9e3ddeca03bd95bdc723a7"} Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.419710 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58cbfb7868-t7r6m" event={"ID":"9038e5e4-2985-4de6-b6d5-e16d170d38d8","Type":"ContainerStarted","Data":"a97baba026709b249dcc0efe341b245bb72a02d046cceefd582f62e2776194ff"} Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.422269 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1869f53-e1c3-4194-a66f-8d16238e0fe3","Type":"ContainerStarted","Data":"defbfdb622fc5f880b384dd4ea57975ec488999b0f6c9d0b78a556d6626e9c4e"} Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.424159 4768 generic.go:334] "Generic (PLEG): container finished" podID="443cde2a-91e0-404e-a067-00558608d888" containerID="6ee905d67acee58de7d78ac1c9d2022cbf725c17e613623019bbaf25d72e2fca" exitCode=0 Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.424229 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-jtdld" 
event={"ID":"443cde2a-91e0-404e-a067-00558608d888","Type":"ContainerDied","Data":"6ee905d67acee58de7d78ac1c9d2022cbf725c17e613623019bbaf25d72e2fca"} Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.427768 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c9b47fdf7-ztl8b" event={"ID":"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917","Type":"ContainerStarted","Data":"c20bf54244e573571e18842af0f1e6698afbca53cb00c8f2a04b66471f7b78ab"} Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.429461 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6qc9l" event={"ID":"56d73241-8027-4861-83ae-a766feceadd2","Type":"ContainerStarted","Data":"6c1ccef1f6f0fff3036ea6cddb7db4339f3ec232a3476a7e67984b2c5ac696fc"} Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.432872 4768 generic.go:334] "Generic (PLEG): container finished" podID="fc2ddf52-e603-44e4-a5ef-aa85afdc7c26" containerID="f2477913873c41eb08ebd465ead580fccabbd35c71072abf31917a3dd882322b" exitCode=0 Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.433706 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-79ljq" event={"ID":"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26","Type":"ContainerDied","Data":"f2477913873c41eb08ebd465ead580fccabbd35c71072abf31917a3dd882322b"} Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.433800 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-79ljq" event={"ID":"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26","Type":"ContainerStarted","Data":"374c53fd5808380014ea96c88a695b3f44a05b46e3c69df43614f24b37b9e50d"} Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.433973 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-v5vq6" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.434208 4768 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.492916 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-6qc9l" podStartSLOduration=2.032373361 podStartE2EDuration="37.492880278s" podCreationTimestamp="2025-11-24 17:07:55 +0000 UTC" firstStartedPulling="2025-11-24 17:07:56.557859909 +0000 UTC m=+957.804828567" lastFinishedPulling="2025-11-24 17:08:32.018366826 +0000 UTC m=+993.265335484" observedRunningTime="2025-11-24 17:08:32.484180302 +0000 UTC m=+993.731148960" watchObservedRunningTime="2025-11-24 17:08:32.492880278 +0000 UTC m=+993.739848936"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.676040 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-74667f8554-ph5sd"]
Nov 24 17:08:32 crc kubenswrapper[4768]: E1124 17:08:32.676643 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25ecf7c-a4b8-40e9-97b1-2b52c3094474" containerName="placement-db-sync"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.676660 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25ecf7c-a4b8-40e9-97b1-2b52c3094474" containerName="placement-db-sync"
Nov 24 17:08:32 crc kubenswrapper[4768]: E1124 17:08:32.676681 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b29f36-8738-4aff-b55f-9bf0ce77e344" containerName="keystone-bootstrap"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.676688 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b29f36-8738-4aff-b55f-9bf0ce77e344" containerName="keystone-bootstrap"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.676865 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="07b29f36-8738-4aff-b55f-9bf0ce77e344" containerName="keystone-bootstrap"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.676888 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25ecf7c-a4b8-40e9-97b1-2b52c3094474" containerName="placement-db-sync"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.677436 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-74667f8554-ph5sd"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.688338 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.688551 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.688660 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.688765 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.689007 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.689778 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-hz2rd"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.690496 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-58f546f576-kqv27"]
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.691923 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-58f546f576-kqv27"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.695845 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.696012 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.696256 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bclrl"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.696499 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.696599 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.704469 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74667f8554-ph5sd"]
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.716579 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-58f546f576-kqv27"]
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.768131 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czljr\" (UniqueName: \"kubernetes.io/projected/eff6ece5-de21-4541-96d3-7a82e5a1d789-kube-api-access-czljr\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.768186 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eff6ece5-de21-4541-96d3-7a82e5a1d789-credential-keys\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.768205 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/244d26f2-3748-48ba-ab9f-ba52e5ad5729-logs\") pod \"placement-58f546f576-kqv27\" (UID: \"244d26f2-3748-48ba-ab9f-ba52e5ad5729\") " pod="openstack/placement-58f546f576-kqv27"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.768258 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff6ece5-de21-4541-96d3-7a82e5a1d789-combined-ca-bundle\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.768282 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/244d26f2-3748-48ba-ab9f-ba52e5ad5729-config-data\") pod \"placement-58f546f576-kqv27\" (UID: \"244d26f2-3748-48ba-ab9f-ba52e5ad5729\") " pod="openstack/placement-58f546f576-kqv27"
Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.768313 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/244d26f2-3748-48ba-ab9f-ba52e5ad5729-scripts\") pod \"placement-58f546f576-kqv27\" (UID: \"244d26f2-3748-48ba-ab9f-ba52e5ad5729\") " pod="openstack/placement-58f546f576-kqv27"
\"244d26f2-3748-48ba-ab9f-ba52e5ad5729\") " pod="openstack/placement-58f546f576-kqv27" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.768422 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/244d26f2-3748-48ba-ab9f-ba52e5ad5729-public-tls-certs\") pod \"placement-58f546f576-kqv27\" (UID: \"244d26f2-3748-48ba-ab9f-ba52e5ad5729\") " pod="openstack/placement-58f546f576-kqv27" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.768486 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8677k\" (UniqueName: \"kubernetes.io/projected/244d26f2-3748-48ba-ab9f-ba52e5ad5729-kube-api-access-8677k\") pod \"placement-58f546f576-kqv27\" (UID: \"244d26f2-3748-48ba-ab9f-ba52e5ad5729\") " pod="openstack/placement-58f546f576-kqv27" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.768537 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eff6ece5-de21-4541-96d3-7a82e5a1d789-fernet-keys\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.768596 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eff6ece5-de21-4541-96d3-7a82e5a1d789-scripts\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.768700 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/244d26f2-3748-48ba-ab9f-ba52e5ad5729-combined-ca-bundle\") pod \"placement-58f546f576-kqv27\" (UID: \"244d26f2-3748-48ba-ab9f-ba52e5ad5729\") " pod="openstack/placement-58f546f576-kqv27" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.768722 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/244d26f2-3748-48ba-ab9f-ba52e5ad5729-internal-tls-certs\") pod \"placement-58f546f576-kqv27\" (UID: \"244d26f2-3748-48ba-ab9f-ba52e5ad5729\") " pod="openstack/placement-58f546f576-kqv27" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.768745 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eff6ece5-de21-4541-96d3-7a82e5a1d789-internal-tls-certs\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.768776 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eff6ece5-de21-4541-96d3-7a82e5a1d789-config-data\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.768789 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/eff6ece5-de21-4541-96d3-7a82e5a1d789-public-tls-certs\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.872142 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff6ece5-de21-4541-96d3-7a82e5a1d789-combined-ca-bundle\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.872223 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/244d26f2-3748-48ba-ab9f-ba52e5ad5729-config-data\") pod \"placement-58f546f576-kqv27\" (UID: \"244d26f2-3748-48ba-ab9f-ba52e5ad5729\") " pod="openstack/placement-58f546f576-kqv27" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.872482 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/244d26f2-3748-48ba-ab9f-ba52e5ad5729-scripts\") pod \"placement-58f546f576-kqv27\" (UID: \"244d26f2-3748-48ba-ab9f-ba52e5ad5729\") " pod="openstack/placement-58f546f576-kqv27" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.872510 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/244d26f2-3748-48ba-ab9f-ba52e5ad5729-public-tls-certs\") pod \"placement-58f546f576-kqv27\" (UID: \"244d26f2-3748-48ba-ab9f-ba52e5ad5729\") " pod="openstack/placement-58f546f576-kqv27" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.872555 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8677k\" (UniqueName: \"kubernetes.io/projected/244d26f2-3748-48ba-ab9f-ba52e5ad5729-kube-api-access-8677k\") pod \"placement-58f546f576-kqv27\" (UID: \"244d26f2-3748-48ba-ab9f-ba52e5ad5729\") " pod="openstack/placement-58f546f576-kqv27" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.872623 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eff6ece5-de21-4541-96d3-7a82e5a1d789-fernet-keys\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.872663 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eff6ece5-de21-4541-96d3-7a82e5a1d789-scripts\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.872759 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/244d26f2-3748-48ba-ab9f-ba52e5ad5729-combined-ca-bundle\") pod \"placement-58f546f576-kqv27\" (UID: \"244d26f2-3748-48ba-ab9f-ba52e5ad5729\") " pod="openstack/placement-58f546f576-kqv27" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.872791 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/244d26f2-3748-48ba-ab9f-ba52e5ad5729-internal-tls-certs\") pod \"placement-58f546f576-kqv27\" 
(UID: \"244d26f2-3748-48ba-ab9f-ba52e5ad5729\") " pod="openstack/placement-58f546f576-kqv27" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.872822 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eff6ece5-de21-4541-96d3-7a82e5a1d789-internal-tls-certs\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.872877 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eff6ece5-de21-4541-96d3-7a82e5a1d789-config-data\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.872901 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eff6ece5-de21-4541-96d3-7a82e5a1d789-public-tls-certs\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.872941 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czljr\" (UniqueName: \"kubernetes.io/projected/eff6ece5-de21-4541-96d3-7a82e5a1d789-kube-api-access-czljr\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.872964 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/244d26f2-3748-48ba-ab9f-ba52e5ad5729-logs\") pod \"placement-58f546f576-kqv27\" (UID: \"244d26f2-3748-48ba-ab9f-ba52e5ad5729\") " pod="openstack/placement-58f546f576-kqv27" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.872989 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eff6ece5-de21-4541-96d3-7a82e5a1d789-credential-keys\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.886821 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/244d26f2-3748-48ba-ab9f-ba52e5ad5729-logs\") pod \"placement-58f546f576-kqv27\" (UID: \"244d26f2-3748-48ba-ab9f-ba52e5ad5729\") " pod="openstack/placement-58f546f576-kqv27" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.914085 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eff6ece5-de21-4541-96d3-7a82e5a1d789-credential-keys\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.914374 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eff6ece5-de21-4541-96d3-7a82e5a1d789-scripts\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.924219 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eff6ece5-de21-4541-96d3-7a82e5a1d789-fernet-keys\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.934587 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czljr\" (UniqueName: \"kubernetes.io/projected/eff6ece5-de21-4541-96d3-7a82e5a1d789-kube-api-access-czljr\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.936114 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/244d26f2-3748-48ba-ab9f-ba52e5ad5729-scripts\") pod \"placement-58f546f576-kqv27\" (UID: \"244d26f2-3748-48ba-ab9f-ba52e5ad5729\") " pod="openstack/placement-58f546f576-kqv27" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.936226 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/244d26f2-3748-48ba-ab9f-ba52e5ad5729-public-tls-certs\") pod \"placement-58f546f576-kqv27\" (UID: \"244d26f2-3748-48ba-ab9f-ba52e5ad5729\") " pod="openstack/placement-58f546f576-kqv27" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.936583 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eff6ece5-de21-4541-96d3-7a82e5a1d789-public-tls-certs\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.938235 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8677k\" (UniqueName: \"kubernetes.io/projected/244d26f2-3748-48ba-ab9f-ba52e5ad5729-kube-api-access-8677k\") pod \"placement-58f546f576-kqv27\" (UID: \"244d26f2-3748-48ba-ab9f-ba52e5ad5729\") " pod="openstack/placement-58f546f576-kqv27" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.938257 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/244d26f2-3748-48ba-ab9f-ba52e5ad5729-internal-tls-certs\") pod \"placement-58f546f576-kqv27\" (UID: \"244d26f2-3748-48ba-ab9f-ba52e5ad5729\") " pod="openstack/placement-58f546f576-kqv27" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.938414 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eff6ece5-de21-4541-96d3-7a82e5a1d789-internal-tls-certs\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.938661 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/244d26f2-3748-48ba-ab9f-ba52e5ad5729-combined-ca-bundle\") pod \"placement-58f546f576-kqv27\" (UID: \"244d26f2-3748-48ba-ab9f-ba52e5ad5729\") " pod="openstack/placement-58f546f576-kqv27" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.938949 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eff6ece5-de21-4541-96d3-7a82e5a1d789-combined-ca-bundle\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.940604 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eff6ece5-de21-4541-96d3-7a82e5a1d789-config-data\") pod \"keystone-74667f8554-ph5sd\" (UID: \"eff6ece5-de21-4541-96d3-7a82e5a1d789\") " pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:32 crc kubenswrapper[4768]: I1124 17:08:32.959551 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/244d26f2-3748-48ba-ab9f-ba52e5ad5729-config-data\") pod \"placement-58f546f576-kqv27\" (UID: \"244d26f2-3748-48ba-ab9f-ba52e5ad5729\") " pod="openstack/placement-58f546f576-kqv27" Nov 24 17:08:33 crc kubenswrapper[4768]: I1124 17:08:33.046729 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:33 crc kubenswrapper[4768]: I1124 17:08:33.068510 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-58f546f576-kqv27" Nov 24 17:08:33 crc kubenswrapper[4768]: I1124 17:08:33.477504 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58cbfb7868-t7r6m" event={"ID":"9038e5e4-2985-4de6-b6d5-e16d170d38d8","Type":"ContainerStarted","Data":"bb8c77a23e3a6dd0ed445a7f567c1437066a8775c25686a167c462424342e5ad"} Nov 24 17:08:33 crc kubenswrapper[4768]: I1124 17:08:33.478071 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-58cbfb7868-t7r6m" Nov 24 17:08:33 crc kubenswrapper[4768]: I1124 17:08:33.484662 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-jtdld" event={"ID":"443cde2a-91e0-404e-a067-00558608d888","Type":"ContainerStarted","Data":"3b5fbecf94d9fd1f7f9f919cf2a44a8aeac895cfdc36db870dd41ab13635a920"} Nov 24 17:08:33 crc kubenswrapper[4768]: I1124 17:08:33.490604 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c9b47fdf7-ztl8b" event={"ID":"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917","Type":"ContainerStarted","Data":"1e8e100a6606b3e4a7eb22307e6be443cfca3da677d94d70c1637d9a4f1c301d"} Nov 24 17:08:33 crc kubenswrapper[4768]: I1124 17:08:33.490628 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c9b47fdf7-ztl8b" event={"ID":"4ffadf60-9eff-4bf9-b0bd-9480cbd0d917","Type":"ContainerStarted","Data":"fa13084c296a6acc2d3ed27749cf040ffb80848e8485744a08c116be7e72798e"} Nov 24 17:08:33 crc kubenswrapper[4768]: I1124 17:08:33.490643 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-c9b47fdf7-ztl8b" Nov 24 17:08:33 crc kubenswrapper[4768]: I1124 17:08:33.504230 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-79ljq" event={"ID":"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26","Type":"ContainerStarted","Data":"00c38ac23f922d3c5fe21bf2403d93bc3258218f419681d8268fae50f31cd7bc"} Nov 24 17:08:33 crc kubenswrapper[4768]: I1124 17:08:33.504410 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-79ljq" Nov 24 17:08:33 crc kubenswrapper[4768]: I1124 17:08:33.505916 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/neutron-58cbfb7868-t7r6m" podStartSLOduration=11.505899532 podStartE2EDuration="11.505899532s" podCreationTimestamp="2025-11-24 17:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:08:33.497216767 +0000 UTC m=+994.744185425" watchObservedRunningTime="2025-11-24 17:08:33.505899532 +0000 UTC m=+994.752868190" Nov 24 17:08:33 crc kubenswrapper[4768]: I1124 17:08:33.522163 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c9b47fdf7-ztl8b" podStartSLOduration=9.522144052 podStartE2EDuration="9.522144052s" podCreationTimestamp="2025-11-24 17:08:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:08:33.521338249 +0000 UTC m=+994.768306907" watchObservedRunningTime="2025-11-24 17:08:33.522144052 +0000 UTC m=+994.769112710" Nov 24 17:08:33 crc kubenswrapper[4768]: I1124 17:08:33.541963 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-jtdld" podStartSLOduration=16.535121864 podStartE2EDuration="28.541948332s" podCreationTimestamp="2025-11-24 17:08:05 +0000 UTC" firstStartedPulling="2025-11-24 17:08:19.435733252 +0000 UTC m=+980.682701900" lastFinishedPulling="2025-11-24 17:08:31.44255971 +0000 UTC m=+992.689528368" observedRunningTime="2025-11-24 17:08:33.538660559 +0000 UTC m=+994.785629217" watchObservedRunningTime="2025-11-24 17:08:33.541948332 +0000 UTC m=+994.788916990" Nov 24 17:08:33 crc kubenswrapper[4768]: I1124 17:08:33.572113 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-79ljq" podStartSLOduration=11.572091423 podStartE2EDuration="11.572091423s" podCreationTimestamp="2025-11-24 17:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:08:33.562908824 +0000 UTC m=+994.809877482" watchObservedRunningTime="2025-11-24 17:08:33.572091423 +0000 UTC m=+994.819060081" Nov 24 17:08:33 crc kubenswrapper[4768]: I1124 17:08:33.613339 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74667f8554-ph5sd"] Nov 24 17:08:33 crc kubenswrapper[4768]: I1124 17:08:33.725495 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-58f546f576-kqv27"] Nov 24 17:08:34 crc kubenswrapper[4768]: E1124 17:08:34.300549 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b5ab834_a98f_4ace_a22f_cde15ebf7f4b.slice/crio-b2dd97c3ec05ee6fde7de0529d5becf3d88de483be8aaecf84ccad322a66c99c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b5ab834_a98f_4ace_a22f_cde15ebf7f4b.slice\": RecentStats: unable to find data in memory cache]" Nov 24 17:08:34 crc kubenswrapper[4768]: I1124 17:08:34.518122 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58f546f576-kqv27" event={"ID":"244d26f2-3748-48ba-ab9f-ba52e5ad5729","Type":"ContainerStarted","Data":"19246d43d5d8bd26306bae23164c9c1aac28891c5ea2388176d1536bae713bd8"} Nov 24 17:08:34 crc kubenswrapper[4768]: I1124 17:08:34.518162 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58f546f576-kqv27" 
event={"ID":"244d26f2-3748-48ba-ab9f-ba52e5ad5729","Type":"ContainerStarted","Data":"8d7a415f9c52be72c134ad5f8dbfdb190c9ed15098d4ccf92e7aa73d6195e2c8"} Nov 24 17:08:34 crc kubenswrapper[4768]: I1124 17:08:34.518174 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58f546f576-kqv27" event={"ID":"244d26f2-3748-48ba-ab9f-ba52e5ad5729","Type":"ContainerStarted","Data":"3faf8a4384719839ddc85ffadd5c43d5e42c0e5de8f65ed486ddebcda78127cc"} Nov 24 17:08:34 crc kubenswrapper[4768]: I1124 17:08:34.518255 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-58f546f576-kqv27" Nov 24 17:08:34 crc kubenswrapper[4768]: I1124 17:08:34.519902 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74667f8554-ph5sd" event={"ID":"eff6ece5-de21-4541-96d3-7a82e5a1d789","Type":"ContainerStarted","Data":"10dfd7492017b29f5091582926e1de3b83fadd117abda5048b9b2d65f0d47796"} Nov 24 17:08:34 crc kubenswrapper[4768]: I1124 17:08:34.519943 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74667f8554-ph5sd" event={"ID":"eff6ece5-de21-4541-96d3-7a82e5a1d789","Type":"ContainerStarted","Data":"b4dc817b8e8de7365de8ea1a5b4e8e0b28ab654b05e2dd9162941deee8a5b987"} Nov 24 17:08:34 crc kubenswrapper[4768]: I1124 17:08:34.538726 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-58f546f576-kqv27" podStartSLOduration=2.538705626 podStartE2EDuration="2.538705626s" podCreationTimestamp="2025-11-24 17:08:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:08:34.537658427 +0000 UTC m=+995.784627085" watchObservedRunningTime="2025-11-24 17:08:34.538705626 +0000 UTC m=+995.785674284" Nov 24 17:08:34 crc kubenswrapper[4768]: I1124 17:08:34.563947 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-74667f8554-ph5sd" podStartSLOduration=2.563929399 podStartE2EDuration="2.563929399s" podCreationTimestamp="2025-11-24 17:08:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:08:34.55972912 +0000 UTC m=+995.806697778" watchObservedRunningTime="2025-11-24 17:08:34.563929399 +0000 UTC m=+995.810898057" Nov 24 17:08:34 crc kubenswrapper[4768]: I1124 17:08:34.892901 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:08:34 crc kubenswrapper[4768]: I1124 17:08:34.893268 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:08:35 crc kubenswrapper[4768]: I1124 17:08:35.535131 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-w8vsq" event={"ID":"632b579a-27e1-4431-a7ad-32631cf804b6","Type":"ContainerStarted","Data":"105f280ac06601dc5642a7a91bf4424487b9ed12801541053ef2ce3dce0e5b9a"} Nov 24 17:08:35 crc kubenswrapper[4768]: I1124 17:08:35.535239 4768 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/keystone-74667f8554-ph5sd" Nov 24 17:08:35 crc kubenswrapper[4768]: I1124 17:08:35.537678 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-58f546f576-kqv27" Nov 24 17:08:36 crc kubenswrapper[4768]: I1124 17:08:36.543774 4768 generic.go:334] "Generic (PLEG): container finished" podID="56d73241-8027-4861-83ae-a766feceadd2" containerID="6c1ccef1f6f0fff3036ea6cddb7db4339f3ec232a3476a7e67984b2c5ac696fc" exitCode=0 Nov 24 17:08:36 crc kubenswrapper[4768]: I1124 17:08:36.543850 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6qc9l" event={"ID":"56d73241-8027-4861-83ae-a766feceadd2","Type":"ContainerDied","Data":"6c1ccef1f6f0fff3036ea6cddb7db4339f3ec232a3476a7e67984b2c5ac696fc"} Nov 24 17:08:36 crc kubenswrapper[4768]: I1124 17:08:36.570298 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-w8vsq" podStartSLOduration=4.149738902 podStartE2EDuration="41.570275631s" podCreationTimestamp="2025-11-24 17:07:55 +0000 UTC" firstStartedPulling="2025-11-24 17:07:56.748658762 +0000 UTC m=+957.995627410" lastFinishedPulling="2025-11-24 17:08:34.169195481 +0000 UTC m=+995.416164139" observedRunningTime="2025-11-24 17:08:35.560292873 +0000 UTC m=+996.807261531" watchObservedRunningTime="2025-11-24 17:08:36.570275631 +0000 UTC m=+997.817244289" Nov 24 17:08:37 crc kubenswrapper[4768]: I1124 17:08:37.761832 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-79ljq" Nov 24 17:08:37 crc kubenswrapper[4768]: I1124 17:08:37.829844 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-rtr97"] Nov 24 17:08:37 crc kubenswrapper[4768]: I1124 17:08:37.834409 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" podUID="59eb907c-4af0-495b-9885-b144bc2d611d" containerName="dnsmasq-dns" containerID="cri-o://5a2917a24aea688028ce8529ae09e36ec0e1b098c10c7697bcb134920bd1cfb1" gracePeriod=10 Nov 24 17:08:38 crc kubenswrapper[4768]: I1124 17:08:38.572615 4768 generic.go:334] "Generic (PLEG): container finished" podID="59eb907c-4af0-495b-9885-b144bc2d611d" containerID="5a2917a24aea688028ce8529ae09e36ec0e1b098c10c7697bcb134920bd1cfb1" exitCode=0 Nov 24 17:08:38 crc kubenswrapper[4768]: I1124 17:08:38.572657 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" event={"ID":"59eb907c-4af0-495b-9885-b144bc2d611d","Type":"ContainerDied","Data":"5a2917a24aea688028ce8529ae09e36ec0e1b098c10c7697bcb134920bd1cfb1"} Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.120515 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6qc9l" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.125847 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.247811 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nchm\" (UniqueName: \"kubernetes.io/projected/59eb907c-4af0-495b-9885-b144bc2d611d-kube-api-access-4nchm\") pod \"59eb907c-4af0-495b-9885-b144bc2d611d\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.247891 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-config\") pod \"59eb907c-4af0-495b-9885-b144bc2d611d\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.247911 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-ovsdbserver-sb\") pod \"59eb907c-4af0-495b-9885-b144bc2d611d\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.247936 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/56d73241-8027-4861-83ae-a766feceadd2-db-sync-config-data\") pod \"56d73241-8027-4861-83ae-a766feceadd2\" (UID: \"56d73241-8027-4861-83ae-a766feceadd2\") " Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.248685 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d73241-8027-4861-83ae-a766feceadd2-combined-ca-bundle\") pod \"56d73241-8027-4861-83ae-a766feceadd2\" (UID: \"56d73241-8027-4861-83ae-a766feceadd2\") " Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.248734 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-dns-svc\") pod \"59eb907c-4af0-495b-9885-b144bc2d611d\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.248758 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhp7l\" (UniqueName: \"kubernetes.io/projected/56d73241-8027-4861-83ae-a766feceadd2-kube-api-access-vhp7l\") pod \"56d73241-8027-4861-83ae-a766feceadd2\" (UID: \"56d73241-8027-4861-83ae-a766feceadd2\") " Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.248784 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-ovsdbserver-nb\") pod \"59eb907c-4af0-495b-9885-b144bc2d611d\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.248823 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-dns-swift-storage-0\") pod \"59eb907c-4af0-495b-9885-b144bc2d611d\" (UID: \"59eb907c-4af0-495b-9885-b144bc2d611d\") " Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.254267 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59eb907c-4af0-495b-9885-b144bc2d611d-kube-api-access-4nchm" (OuterVolumeSpecName: "kube-api-access-4nchm") pod 
"59eb907c-4af0-495b-9885-b144bc2d611d" (UID: "59eb907c-4af0-495b-9885-b144bc2d611d"). InnerVolumeSpecName "kube-api-access-4nchm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.254277 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d73241-8027-4861-83ae-a766feceadd2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "56d73241-8027-4861-83ae-a766feceadd2" (UID: "56d73241-8027-4861-83ae-a766feceadd2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.274107 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d73241-8027-4861-83ae-a766feceadd2-kube-api-access-vhp7l" (OuterVolumeSpecName: "kube-api-access-vhp7l") pod "56d73241-8027-4861-83ae-a766feceadd2" (UID: "56d73241-8027-4861-83ae-a766feceadd2"). InnerVolumeSpecName "kube-api-access-vhp7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.296735 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "59eb907c-4af0-495b-9885-b144bc2d611d" (UID: "59eb907c-4af0-495b-9885-b144bc2d611d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.299669 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d73241-8027-4861-83ae-a766feceadd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56d73241-8027-4861-83ae-a766feceadd2" (UID: "56d73241-8027-4861-83ae-a766feceadd2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.302004 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "59eb907c-4af0-495b-9885-b144bc2d611d" (UID: "59eb907c-4af0-495b-9885-b144bc2d611d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.308280 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "59eb907c-4af0-495b-9885-b144bc2d611d" (UID: "59eb907c-4af0-495b-9885-b144bc2d611d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.315875 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-config" (OuterVolumeSpecName: "config") pod "59eb907c-4af0-495b-9885-b144bc2d611d" (UID: "59eb907c-4af0-495b-9885-b144bc2d611d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.317286 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "59eb907c-4af0-495b-9885-b144bc2d611d" (UID: "59eb907c-4af0-495b-9885-b144bc2d611d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.350437 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d73241-8027-4861-83ae-a766feceadd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.350467 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.350476 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhp7l\" (UniqueName: \"kubernetes.io/projected/56d73241-8027-4861-83ae-a766feceadd2-kube-api-access-vhp7l\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.350487 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.350496 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.350506 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nchm\" (UniqueName: \"kubernetes.io/projected/59eb907c-4af0-495b-9885-b144bc2d611d-kube-api-access-4nchm\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.350516 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.350525 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59eb907c-4af0-495b-9885-b144bc2d611d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.350532 4768 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/56d73241-8027-4861-83ae-a766feceadd2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.599117 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6qc9l" event={"ID":"56d73241-8027-4861-83ae-a766feceadd2","Type":"ContainerDied","Data":"bfffe2da4f2011991c3b48eeb053c23607b70815114b192afcacd99512ee5319"} Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.599151 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfffe2da4f2011991c3b48eeb053c23607b70815114b192afcacd99512ee5319" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.599194 4768 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6qc9l" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.607531 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" event={"ID":"59eb907c-4af0-495b-9885-b144bc2d611d","Type":"ContainerDied","Data":"41d14101d585badf6d08505ed96952020e424045475b55ae9fc35a4efaa12ad2"} Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.607568 4768 scope.go:117] "RemoveContainer" containerID="5a2917a24aea688028ce8529ae09e36ec0e1b098c10c7697bcb134920bd1cfb1" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.607592 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-rtr97" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.639574 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-rtr97"] Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.643004 4768 scope.go:117] "RemoveContainer" containerID="6dd86d526917d2b075e0c5cc81dc33821cb98ae8c4f0120fc1d17727b92bbc34" Nov 24 17:08:40 crc kubenswrapper[4768]: I1124 17:08:40.646118 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-rtr97"] Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.384169 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7fdbb4868-m84ml"] Nov 24 17:08:41 crc kubenswrapper[4768]: E1124 17:08:41.384947 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d73241-8027-4861-83ae-a766feceadd2" containerName="barbican-db-sync" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.384963 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d73241-8027-4861-83ae-a766feceadd2" containerName="barbican-db-sync" Nov 24 17:08:41 crc kubenswrapper[4768]: E1124 17:08:41.384979 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59eb907c-4af0-495b-9885-b144bc2d611d" containerName="init" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.384985 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="59eb907c-4af0-495b-9885-b144bc2d611d" containerName="init" Nov 24 17:08:41 crc kubenswrapper[4768]: E1124 17:08:41.385007 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59eb907c-4af0-495b-9885-b144bc2d611d" containerName="dnsmasq-dns" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.385013 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="59eb907c-4af0-495b-9885-b144bc2d611d" containerName="dnsmasq-dns" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.385184 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d73241-8027-4861-83ae-a766feceadd2" containerName="barbican-db-sync" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.385215 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="59eb907c-4af0-495b-9885-b144bc2d611d" containerName="dnsmasq-dns" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.386136 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7fdbb4868-m84ml" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.392421 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.392727 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5c5z8" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.392990 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.415423 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-864dc88cf9-8c7r4"] Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.417458 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-864dc88cf9-8c7r4" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.419532 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.439426 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7fdbb4868-m84ml"] Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.456565 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-864dc88cf9-8c7r4"] Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.470764 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-2c94v"] Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.472022 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.473902 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/758f8654-5012-43b2-a4b5-adc902722254-config-data\") pod \"barbican-keystone-listener-7fdbb4868-m84ml\" (UID: \"758f8654-5012-43b2-a4b5-adc902722254\") " pod="openstack/barbican-keystone-listener-7fdbb4868-m84ml" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.473965 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/758f8654-5012-43b2-a4b5-adc902722254-logs\") pod \"barbican-keystone-listener-7fdbb4868-m84ml\" (UID: \"758f8654-5012-43b2-a4b5-adc902722254\") " pod="openstack/barbican-keystone-listener-7fdbb4868-m84ml" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.474005 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/758f8654-5012-43b2-a4b5-adc902722254-combined-ca-bundle\") pod \"barbican-keystone-listener-7fdbb4868-m84ml\" (UID: \"758f8654-5012-43b2-a4b5-adc902722254\") " pod="openstack/barbican-keystone-listener-7fdbb4868-m84ml" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.474025 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfgn5\" (UniqueName: \"kubernetes.io/projected/758f8654-5012-43b2-a4b5-adc902722254-kube-api-access-mfgn5\") pod \"barbican-keystone-listener-7fdbb4868-m84ml\" (UID: \"758f8654-5012-43b2-a4b5-adc902722254\") " 
pod="openstack/barbican-keystone-listener-7fdbb4868-m84ml" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.474053 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/758f8654-5012-43b2-a4b5-adc902722254-config-data-custom\") pod \"barbican-keystone-listener-7fdbb4868-m84ml\" (UID: \"758f8654-5012-43b2-a4b5-adc902722254\") " pod="openstack/barbican-keystone-listener-7fdbb4868-m84ml" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.545662 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-2c94v"] Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.575392 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/758f8654-5012-43b2-a4b5-adc902722254-config-data-custom\") pod \"barbican-keystone-listener-7fdbb4868-m84ml\" (UID: \"758f8654-5012-43b2-a4b5-adc902722254\") " pod="openstack/barbican-keystone-listener-7fdbb4868-m84ml" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.575446 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shhkx\" (UniqueName: \"kubernetes.io/projected/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-kube-api-access-shhkx\") pod \"dnsmasq-dns-85ff748b95-2c94v\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.575485 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-config\") pod \"dnsmasq-dns-85ff748b95-2c94v\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.575502 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-2c94v\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.575530 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-dns-svc\") pod \"dnsmasq-dns-85ff748b95-2c94v\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.575551 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af648a4f-aca8-4b51-8650-6990ae26b259-logs\") pod \"barbican-worker-864dc88cf9-8c7r4\" (UID: \"af648a4f-aca8-4b51-8650-6990ae26b259\") " pod="openstack/barbican-worker-864dc88cf9-8c7r4" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.575572 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/758f8654-5012-43b2-a4b5-adc902722254-config-data\") pod \"barbican-keystone-listener-7fdbb4868-m84ml\" (UID: \"758f8654-5012-43b2-a4b5-adc902722254\") " pod="openstack/barbican-keystone-listener-7fdbb4868-m84ml" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 
17:08:41.575591 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af648a4f-aca8-4b51-8650-6990ae26b259-config-data\") pod \"barbican-worker-864dc88cf9-8c7r4\" (UID: \"af648a4f-aca8-4b51-8650-6990ae26b259\") " pod="openstack/barbican-worker-864dc88cf9-8c7r4" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.575637 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af648a4f-aca8-4b51-8650-6990ae26b259-combined-ca-bundle\") pod \"barbican-worker-864dc88cf9-8c7r4\" (UID: \"af648a4f-aca8-4b51-8650-6990ae26b259\") " pod="openstack/barbican-worker-864dc88cf9-8c7r4" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.575664 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/758f8654-5012-43b2-a4b5-adc902722254-logs\") pod \"barbican-keystone-listener-7fdbb4868-m84ml\" (UID: \"758f8654-5012-43b2-a4b5-adc902722254\") " pod="openstack/barbican-keystone-listener-7fdbb4868-m84ml" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.575684 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-2c94v\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.575700 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af648a4f-aca8-4b51-8650-6990ae26b259-config-data-custom\") pod \"barbican-worker-864dc88cf9-8c7r4\" (UID: \"af648a4f-aca8-4b51-8650-6990ae26b259\") " pod="openstack/barbican-worker-864dc88cf9-8c7r4" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.575714 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-2c94v\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.575747 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/758f8654-5012-43b2-a4b5-adc902722254-combined-ca-bundle\") pod \"barbican-keystone-listener-7fdbb4868-m84ml\" (UID: \"758f8654-5012-43b2-a4b5-adc902722254\") " pod="openstack/barbican-keystone-listener-7fdbb4868-m84ml" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.575766 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mdk4\" (UniqueName: \"kubernetes.io/projected/af648a4f-aca8-4b51-8650-6990ae26b259-kube-api-access-9mdk4\") pod \"barbican-worker-864dc88cf9-8c7r4\" (UID: \"af648a4f-aca8-4b51-8650-6990ae26b259\") " pod="openstack/barbican-worker-864dc88cf9-8c7r4" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.575782 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfgn5\" (UniqueName: \"kubernetes.io/projected/758f8654-5012-43b2-a4b5-adc902722254-kube-api-access-mfgn5\") pod 
\"barbican-keystone-listener-7fdbb4868-m84ml\" (UID: \"758f8654-5012-43b2-a4b5-adc902722254\") " pod="openstack/barbican-keystone-listener-7fdbb4868-m84ml" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.579815 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/758f8654-5012-43b2-a4b5-adc902722254-logs\") pod \"barbican-keystone-listener-7fdbb4868-m84ml\" (UID: \"758f8654-5012-43b2-a4b5-adc902722254\") " pod="openstack/barbican-keystone-listener-7fdbb4868-m84ml" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.584740 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/758f8654-5012-43b2-a4b5-adc902722254-config-data-custom\") pod \"barbican-keystone-listener-7fdbb4868-m84ml\" (UID: \"758f8654-5012-43b2-a4b5-adc902722254\") " pod="openstack/barbican-keystone-listener-7fdbb4868-m84ml" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.587739 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/758f8654-5012-43b2-a4b5-adc902722254-combined-ca-bundle\") pod \"barbican-keystone-listener-7fdbb4868-m84ml\" (UID: \"758f8654-5012-43b2-a4b5-adc902722254\") " pod="openstack/barbican-keystone-listener-7fdbb4868-m84ml" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.596235 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/758f8654-5012-43b2-a4b5-adc902722254-config-data\") pod \"barbican-keystone-listener-7fdbb4868-m84ml\" (UID: \"758f8654-5012-43b2-a4b5-adc902722254\") " pod="openstack/barbican-keystone-listener-7fdbb4868-m84ml" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.604771 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59eb907c-4af0-495b-9885-b144bc2d611d" path="/var/lib/kubelet/pods/59eb907c-4af0-495b-9885-b144bc2d611d/volumes" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.609997 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfgn5\" (UniqueName: \"kubernetes.io/projected/758f8654-5012-43b2-a4b5-adc902722254-kube-api-access-mfgn5\") pod \"barbican-keystone-listener-7fdbb4868-m84ml\" (UID: \"758f8654-5012-43b2-a4b5-adc902722254\") " pod="openstack/barbican-keystone-listener-7fdbb4868-m84ml" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.624008 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-86669456c4-fp95m"] Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.678790 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-config\") pod \"dnsmasq-dns-85ff748b95-2c94v\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.678866 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-2c94v\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.678932 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-dns-svc\") pod \"dnsmasq-dns-85ff748b95-2c94v\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.678957 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af648a4f-aca8-4b51-8650-6990ae26b259-logs\") pod \"barbican-worker-864dc88cf9-8c7r4\" (UID: \"af648a4f-aca8-4b51-8650-6990ae26b259\") " pod="openstack/barbican-worker-864dc88cf9-8c7r4" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.678983 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af648a4f-aca8-4b51-8650-6990ae26b259-config-data\") pod \"barbican-worker-864dc88cf9-8c7r4\" (UID: \"af648a4f-aca8-4b51-8650-6990ae26b259\") " pod="openstack/barbican-worker-864dc88cf9-8c7r4" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.679052 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af648a4f-aca8-4b51-8650-6990ae26b259-combined-ca-bundle\") pod \"barbican-worker-864dc88cf9-8c7r4\" (UID: \"af648a4f-aca8-4b51-8650-6990ae26b259\") " pod="openstack/barbican-worker-864dc88cf9-8c7r4" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.679100 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-2c94v\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.679121 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af648a4f-aca8-4b51-8650-6990ae26b259-config-data-custom\") pod \"barbican-worker-864dc88cf9-8c7r4\" (UID: \"af648a4f-aca8-4b51-8650-6990ae26b259\") " pod="openstack/barbican-worker-864dc88cf9-8c7r4" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.679139 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-2c94v\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.679207 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mdk4\" (UniqueName: \"kubernetes.io/projected/af648a4f-aca8-4b51-8650-6990ae26b259-kube-api-access-9mdk4\") pod \"barbican-worker-864dc88cf9-8c7r4\" (UID: \"af648a4f-aca8-4b51-8650-6990ae26b259\") " pod="openstack/barbican-worker-864dc88cf9-8c7r4" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.679258 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shhkx\" (UniqueName: \"kubernetes.io/projected/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-kube-api-access-shhkx\") pod \"dnsmasq-dns-85ff748b95-2c94v\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.681001 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-config\") pod \"dnsmasq-dns-85ff748b95-2c94v\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.681640 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-2c94v\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.682544 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af648a4f-aca8-4b51-8650-6990ae26b259-logs\") pod \"barbican-worker-864dc88cf9-8c7r4\" (UID: \"af648a4f-aca8-4b51-8650-6990ae26b259\") " pod="openstack/barbican-worker-864dc88cf9-8c7r4" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.682750 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-86669456c4-fp95m"] Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.682990 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-86669456c4-fp95m" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.683497 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-dns-svc\") pod \"dnsmasq-dns-85ff748b95-2c94v\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.685646 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-2c94v\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.686021 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-2c94v\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.688677 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af648a4f-aca8-4b51-8650-6990ae26b259-combined-ca-bundle\") pod \"barbican-worker-864dc88cf9-8c7r4\" (UID: \"af648a4f-aca8-4b51-8650-6990ae26b259\") " pod="openstack/barbican-worker-864dc88cf9-8c7r4" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.692243 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.707315 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af648a4f-aca8-4b51-8650-6990ae26b259-config-data\") pod \"barbican-worker-864dc88cf9-8c7r4\" (UID: \"af648a4f-aca8-4b51-8650-6990ae26b259\") " pod="openstack/barbican-worker-864dc88cf9-8c7r4" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.715720 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/af648a4f-aca8-4b51-8650-6990ae26b259-config-data-custom\") pod \"barbican-worker-864dc88cf9-8c7r4\" (UID: \"af648a4f-aca8-4b51-8650-6990ae26b259\") " pod="openstack/barbican-worker-864dc88cf9-8c7r4" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.716328 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mdk4\" (UniqueName: \"kubernetes.io/projected/af648a4f-aca8-4b51-8650-6990ae26b259-kube-api-access-9mdk4\") pod \"barbican-worker-864dc88cf9-8c7r4\" (UID: \"af648a4f-aca8-4b51-8650-6990ae26b259\") " pod="openstack/barbican-worker-864dc88cf9-8c7r4" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.720934 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shhkx\" (UniqueName: \"kubernetes.io/projected/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-kube-api-access-shhkx\") pod \"dnsmasq-dns-85ff748b95-2c94v\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.780048 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-config-data\") pod \"barbican-api-86669456c4-fp95m\" (UID: \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\") " pod="openstack/barbican-api-86669456c4-fp95m" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.780275 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-logs\") pod \"barbican-api-86669456c4-fp95m\" (UID: \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\") " pod="openstack/barbican-api-86669456c4-fp95m" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.780373 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-combined-ca-bundle\") pod \"barbican-api-86669456c4-fp95m\" (UID: \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\") " pod="openstack/barbican-api-86669456c4-fp95m" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.780450 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d868t\" (UniqueName: \"kubernetes.io/projected/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-kube-api-access-d868t\") pod \"barbican-api-86669456c4-fp95m\" (UID: \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\") " pod="openstack/barbican-api-86669456c4-fp95m" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.780541 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-config-data-custom\") pod \"barbican-api-86669456c4-fp95m\" (UID: \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\") " pod="openstack/barbican-api-86669456c4-fp95m" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.802910 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7fdbb4868-m84ml" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.817761 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-864dc88cf9-8c7r4" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.831654 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.881997 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-logs\") pod \"barbican-api-86669456c4-fp95m\" (UID: \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\") " pod="openstack/barbican-api-86669456c4-fp95m" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.882042 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-combined-ca-bundle\") pod \"barbican-api-86669456c4-fp95m\" (UID: \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\") " pod="openstack/barbican-api-86669456c4-fp95m" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.882065 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d868t\" (UniqueName: \"kubernetes.io/projected/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-kube-api-access-d868t\") pod \"barbican-api-86669456c4-fp95m\" (UID: \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\") " pod="openstack/barbican-api-86669456c4-fp95m" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.882100 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-config-data-custom\") pod \"barbican-api-86669456c4-fp95m\" (UID: \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\") " pod="openstack/barbican-api-86669456c4-fp95m" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.882141 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-config-data\") pod \"barbican-api-86669456c4-fp95m\" (UID: \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\") " pod="openstack/barbican-api-86669456c4-fp95m" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.882740 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-logs\") pod \"barbican-api-86669456c4-fp95m\" (UID: \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\") " pod="openstack/barbican-api-86669456c4-fp95m" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.886002 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-combined-ca-bundle\") pod \"barbican-api-86669456c4-fp95m\" (UID: \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\") " pod="openstack/barbican-api-86669456c4-fp95m" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.886576 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-config-data-custom\") pod \"barbican-api-86669456c4-fp95m\" (UID: \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\") " pod="openstack/barbican-api-86669456c4-fp95m" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.886912 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-config-data\") pod \"barbican-api-86669456c4-fp95m\" (UID: \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\") " pod="openstack/barbican-api-86669456c4-fp95m" Nov 24 17:08:41 crc kubenswrapper[4768]: I1124 17:08:41.916731 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d868t\" (UniqueName: \"kubernetes.io/projected/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-kube-api-access-d868t\") pod \"barbican-api-86669456c4-fp95m\" (UID: \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\") " pod="openstack/barbican-api-86669456c4-fp95m" Nov 24 17:08:42 crc kubenswrapper[4768]: I1124 17:08:42.067546 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-86669456c4-fp95m" Nov 24 17:08:42 crc kubenswrapper[4768]: I1124 17:08:42.319695 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7fdbb4868-m84ml"] Nov 24 17:08:42 crc kubenswrapper[4768]: I1124 17:08:42.379557 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-864dc88cf9-8c7r4"] Nov 24 17:08:42 crc kubenswrapper[4768]: W1124 17:08:42.387680 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf648a4f_aca8_4b51_8650_6990ae26b259.slice/crio-bb74785a3e2d574c5f85b1b935753361ce3545998d4076ddb76fc989763bc062 WatchSource:0}: Error finding container bb74785a3e2d574c5f85b1b935753361ce3545998d4076ddb76fc989763bc062: Status 404 returned error can't find the container with id bb74785a3e2d574c5f85b1b935753361ce3545998d4076ddb76fc989763bc062 Nov 24 17:08:42 crc kubenswrapper[4768]: I1124 17:08:42.423058 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-86669456c4-fp95m"] Nov 24 17:08:42 crc kubenswrapper[4768]: I1124 17:08:42.483586 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-2c94v"] Nov 24 17:08:42 crc kubenswrapper[4768]: I1124 17:08:42.695416 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-2c94v" event={"ID":"71fb0884-4764-4f3e-bcd7-b2227e9b24d2","Type":"ContainerStarted","Data":"14f20c9d6a0486a5497a1416e2d706122cc3068698c9cce2f5ecd9d091d53b18"} Nov 24 17:08:42 crc kubenswrapper[4768]: I1124 17:08:42.695492 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-2c94v" event={"ID":"71fb0884-4764-4f3e-bcd7-b2227e9b24d2","Type":"ContainerStarted","Data":"51b4163429982f5e0832154a90aa6367e4047ab7f9adef8d5fc8b284e153406c"} Nov 24 17:08:42 crc kubenswrapper[4768]: I1124 17:08:42.703941 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1869f53-e1c3-4194-a66f-8d16238e0fe3","Type":"ContainerStarted","Data":"4d0421fd795547d9afbea9a0c6d6cdd80615efbd3319c43ccd51cbd82a0060d7"} Nov 24 17:08:42 crc kubenswrapper[4768]: I1124 17:08:42.704162 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1869f53-e1c3-4194-a66f-8d16238e0fe3" containerName="ceilometer-central-agent" containerID="cri-o://8ec5009a1aa05e7dcf1ea752dd6c371af21bd69d151bd859e9fe1786fd0938b0" gracePeriod=30 Nov 24 17:08:42 crc kubenswrapper[4768]: I1124 17:08:42.704484 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 17:08:42 crc kubenswrapper[4768]: I1124 17:08:42.704476 4768 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1869f53-e1c3-4194-a66f-8d16238e0fe3" containerName="sg-core" containerID="cri-o://defbfdb622fc5f880b384dd4ea57975ec488999b0f6c9d0b78a556d6626e9c4e" gracePeriod=30 Nov 24 17:08:42 crc kubenswrapper[4768]: I1124 17:08:42.704513 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1869f53-e1c3-4194-a66f-8d16238e0fe3" containerName="ceilometer-notification-agent" containerID="cri-o://e3e7ae9be55c2966455d44a4b641633d27c675aeb43d53acfa33c92fdf151708" gracePeriod=30 Nov 24 17:08:42 crc kubenswrapper[4768]: I1124 17:08:42.704666 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e1869f53-e1c3-4194-a66f-8d16238e0fe3" containerName="proxy-httpd" containerID="cri-o://4d0421fd795547d9afbea9a0c6d6cdd80615efbd3319c43ccd51cbd82a0060d7" gracePeriod=30 Nov 24 17:08:42 crc kubenswrapper[4768]: I1124 17:08:42.721074 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86669456c4-fp95m" event={"ID":"89f2a026-39af-4e20-bdf8-82ab0ace0d4e","Type":"ContainerStarted","Data":"df72553b10729b293fc79189ea00e00441d3d10f6abb484150add5a7fc777dac"} Nov 24 17:08:42 crc kubenswrapper[4768]: I1124 17:08:42.723439 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fdbb4868-m84ml" event={"ID":"758f8654-5012-43b2-a4b5-adc902722254","Type":"ContainerStarted","Data":"6cb821938c96115e1d308212e1d7018bf3d628ff0e86fbf06493282da4e4d4ff"} Nov 24 17:08:42 crc kubenswrapper[4768]: I1124 17:08:42.734478 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-864dc88cf9-8c7r4" event={"ID":"af648a4f-aca8-4b51-8650-6990ae26b259","Type":"ContainerStarted","Data":"bb74785a3e2d574c5f85b1b935753361ce3545998d4076ddb76fc989763bc062"} Nov 24 17:08:42 crc kubenswrapper[4768]: I1124 17:08:42.739361 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.891907069 podStartE2EDuration="47.739331758s" podCreationTimestamp="2025-11-24 17:07:55 +0000 UTC" firstStartedPulling="2025-11-24 17:07:56.50446499 +0000 UTC m=+957.751433648" lastFinishedPulling="2025-11-24 17:08:41.351889679 +0000 UTC m=+1002.598858337" observedRunningTime="2025-11-24 17:08:42.737508316 +0000 UTC m=+1003.984476974" watchObservedRunningTime="2025-11-24 17:08:42.739331758 +0000 UTC m=+1003.986300406" Nov 24 17:08:43 crc kubenswrapper[4768]: I1124 17:08:43.746052 4768 generic.go:334] "Generic (PLEG): container finished" podID="632b579a-27e1-4431-a7ad-32631cf804b6" containerID="105f280ac06601dc5642a7a91bf4424487b9ed12801541053ef2ce3dce0e5b9a" exitCode=0 Nov 24 17:08:43 crc kubenswrapper[4768]: I1124 17:08:43.746185 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-w8vsq" event={"ID":"632b579a-27e1-4431-a7ad-32631cf804b6","Type":"ContainerDied","Data":"105f280ac06601dc5642a7a91bf4424487b9ed12801541053ef2ce3dce0e5b9a"} Nov 24 17:08:43 crc kubenswrapper[4768]: I1124 17:08:43.751395 4768 generic.go:334] "Generic (PLEG): container finished" podID="71fb0884-4764-4f3e-bcd7-b2227e9b24d2" containerID="14f20c9d6a0486a5497a1416e2d706122cc3068698c9cce2f5ecd9d091d53b18" exitCode=0 Nov 24 17:08:43 crc kubenswrapper[4768]: I1124 17:08:43.751475 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-2c94v" 
event={"ID":"71fb0884-4764-4f3e-bcd7-b2227e9b24d2","Type":"ContainerDied","Data":"14f20c9d6a0486a5497a1416e2d706122cc3068698c9cce2f5ecd9d091d53b18"} Nov 24 17:08:43 crc kubenswrapper[4768]: I1124 17:08:43.753588 4768 generic.go:334] "Generic (PLEG): container finished" podID="e1869f53-e1c3-4194-a66f-8d16238e0fe3" containerID="4d0421fd795547d9afbea9a0c6d6cdd80615efbd3319c43ccd51cbd82a0060d7" exitCode=0 Nov 24 17:08:43 crc kubenswrapper[4768]: I1124 17:08:43.753612 4768 generic.go:334] "Generic (PLEG): container finished" podID="e1869f53-e1c3-4194-a66f-8d16238e0fe3" containerID="defbfdb622fc5f880b384dd4ea57975ec488999b0f6c9d0b78a556d6626e9c4e" exitCode=2 Nov 24 17:08:43 crc kubenswrapper[4768]: I1124 17:08:43.753621 4768 generic.go:334] "Generic (PLEG): container finished" podID="e1869f53-e1c3-4194-a66f-8d16238e0fe3" containerID="8ec5009a1aa05e7dcf1ea752dd6c371af21bd69d151bd859e9fe1786fd0938b0" exitCode=0 Nov 24 17:08:43 crc kubenswrapper[4768]: I1124 17:08:43.753660 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1869f53-e1c3-4194-a66f-8d16238e0fe3","Type":"ContainerDied","Data":"4d0421fd795547d9afbea9a0c6d6cdd80615efbd3319c43ccd51cbd82a0060d7"} Nov 24 17:08:43 crc kubenswrapper[4768]: I1124 17:08:43.753682 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1869f53-e1c3-4194-a66f-8d16238e0fe3","Type":"ContainerDied","Data":"defbfdb622fc5f880b384dd4ea57975ec488999b0f6c9d0b78a556d6626e9c4e"} Nov 24 17:08:43 crc kubenswrapper[4768]: I1124 17:08:43.753692 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1869f53-e1c3-4194-a66f-8d16238e0fe3","Type":"ContainerDied","Data":"8ec5009a1aa05e7dcf1ea752dd6c371af21bd69d151bd859e9fe1786fd0938b0"} Nov 24 17:08:43 crc kubenswrapper[4768]: I1124 17:08:43.754935 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86669456c4-fp95m" event={"ID":"89f2a026-39af-4e20-bdf8-82ab0ace0d4e","Type":"ContainerStarted","Data":"135af4d4183f6baa4cfb68f1b673b7d7a4aae454e9c7aa5993cfc0bcd2e8563c"} Nov 24 17:08:43 crc kubenswrapper[4768]: I1124 17:08:43.754955 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86669456c4-fp95m" event={"ID":"89f2a026-39af-4e20-bdf8-82ab0ace0d4e","Type":"ContainerStarted","Data":"569a15f401c350856f5bf6789ec911c19577741448c1d279564244822de0223d"} Nov 24 17:08:43 crc kubenswrapper[4768]: I1124 17:08:43.755427 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-86669456c4-fp95m" Nov 24 17:08:43 crc kubenswrapper[4768]: I1124 17:08:43.755447 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-86669456c4-fp95m" Nov 24 17:08:43 crc kubenswrapper[4768]: I1124 17:08:43.802360 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-86669456c4-fp95m" podStartSLOduration=2.8023319239999998 podStartE2EDuration="2.802331924s" podCreationTimestamp="2025-11-24 17:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:08:43.799657438 +0000 UTC m=+1005.046626096" watchObservedRunningTime="2025-11-24 17:08:43.802331924 +0000 UTC m=+1005.049300582" Nov 24 17:08:44 crc kubenswrapper[4768]: E1124 17:08:44.546030 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b5ab834_a98f_4ace_a22f_cde15ebf7f4b.slice/crio-b2dd97c3ec05ee6fde7de0529d5becf3d88de483be8aaecf84ccad322a66c99c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b5ab834_a98f_4ace_a22f_cde15ebf7f4b.slice\": RecentStats: unable to find data in memory cache]" Nov 24 17:08:44 crc kubenswrapper[4768]: I1124 17:08:44.767548 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-2c94v" event={"ID":"71fb0884-4764-4f3e-bcd7-b2227e9b24d2","Type":"ContainerStarted","Data":"0ae20eba14eb3a3912e3d77ec46d5180fe17f3c0dba5d2d061244a74a36520b8"} Nov 24 17:08:44 crc kubenswrapper[4768]: I1124 17:08:44.768153 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:44 crc kubenswrapper[4768]: I1124 17:08:44.770178 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fdbb4868-m84ml" event={"ID":"758f8654-5012-43b2-a4b5-adc902722254","Type":"ContainerStarted","Data":"b31eb2163550c5e250bd9ae2876f02eb8c70f3aba08c76ac5985109029d93512"} Nov 24 17:08:44 crc kubenswrapper[4768]: I1124 17:08:44.770234 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fdbb4868-m84ml" event={"ID":"758f8654-5012-43b2-a4b5-adc902722254","Type":"ContainerStarted","Data":"bcf4e523efc9529bc91641e059a0ced18cc309c5f25d5535c25e663887788182"} Nov 24 17:08:44 crc kubenswrapper[4768]: I1124 17:08:44.776182 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-864dc88cf9-8c7r4" event={"ID":"af648a4f-aca8-4b51-8650-6990ae26b259","Type":"ContainerStarted","Data":"8a22f9840e0cbca099ad37bcc01706ec1317f8379efa4d07bdf2f721a5390b4f"} Nov 24 17:08:44 crc kubenswrapper[4768]: I1124 17:08:44.776228 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-864dc88cf9-8c7r4" event={"ID":"af648a4f-aca8-4b51-8650-6990ae26b259","Type":"ContainerStarted","Data":"882a4d297833b542b3a3180873cedc1d1002e497f41a19c1f619b47592a42513"} Nov 24 17:08:44 crc kubenswrapper[4768]: I1124 17:08:44.800228 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-2c94v" podStartSLOduration=3.80020742 podStartE2EDuration="3.80020742s" podCreationTimestamp="2025-11-24 17:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:08:44.799967644 +0000 UTC m=+1006.046936302" watchObservedRunningTime="2025-11-24 17:08:44.80020742 +0000 UTC m=+1006.047176088" Nov 24 17:08:44 crc kubenswrapper[4768]: I1124 17:08:44.827126 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-54d9965d5d-g2r7n"] Nov 24 17:08:44 crc kubenswrapper[4768]: I1124 17:08:44.829765 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:44 crc kubenswrapper[4768]: I1124 17:08:44.832926 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 24 17:08:44 crc kubenswrapper[4768]: I1124 17:08:44.833189 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 24 17:08:44 crc kubenswrapper[4768]: I1124 17:08:44.856088 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-54d9965d5d-g2r7n"] Nov 24 17:08:44 crc kubenswrapper[4768]: I1124 17:08:44.876609 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-864dc88cf9-8c7r4" podStartSLOduration=2.2276179 podStartE2EDuration="3.876585899s" podCreationTimestamp="2025-11-24 17:08:41 +0000 UTC" firstStartedPulling="2025-11-24 17:08:42.390132127 +0000 UTC m=+1003.637100785" lastFinishedPulling="2025-11-24 17:08:44.039100126 +0000 UTC m=+1005.286068784" observedRunningTime="2025-11-24 17:08:44.821096971 +0000 UTC m=+1006.068065649" watchObservedRunningTime="2025-11-24 17:08:44.876585899 +0000 UTC m=+1006.123554557" Nov 24 17:08:44 crc kubenswrapper[4768]: I1124 17:08:44.897364 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7fdbb4868-m84ml" podStartSLOduration=2.180384174 podStartE2EDuration="3.897325125s" podCreationTimestamp="2025-11-24 17:08:41 +0000 UTC" firstStartedPulling="2025-11-24 17:08:42.323301418 +0000 UTC m=+1003.570270076" lastFinishedPulling="2025-11-24 17:08:44.040242369 +0000 UTC m=+1005.287211027" observedRunningTime="2025-11-24 17:08:44.846491939 +0000 UTC m=+1006.093460607" watchObservedRunningTime="2025-11-24 17:08:44.897325125 +0000 UTC m=+1006.144293783" Nov 24 17:08:44 crc kubenswrapper[4768]: I1124 17:08:44.937590 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eb91316-55e3-466f-bc29-314359383931-combined-ca-bundle\") pod \"barbican-api-54d9965d5d-g2r7n\" (UID: \"0eb91316-55e3-466f-bc29-314359383931\") " pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:44 crc kubenswrapper[4768]: I1124 17:08:44.937671 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eb91316-55e3-466f-bc29-314359383931-config-data\") pod \"barbican-api-54d9965d5d-g2r7n\" (UID: \"0eb91316-55e3-466f-bc29-314359383931\") " pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:44 crc kubenswrapper[4768]: I1124 17:08:44.937706 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eb91316-55e3-466f-bc29-314359383931-public-tls-certs\") pod \"barbican-api-54d9965d5d-g2r7n\" (UID: \"0eb91316-55e3-466f-bc29-314359383931\") " pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:44 crc kubenswrapper[4768]: I1124 17:08:44.937762 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eb91316-55e3-466f-bc29-314359383931-logs\") pod \"barbican-api-54d9965d5d-g2r7n\" (UID: \"0eb91316-55e3-466f-bc29-314359383931\") " pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:44 crc kubenswrapper[4768]: I1124 17:08:44.938085 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0eb91316-55e3-466f-bc29-314359383931-config-data-custom\") pod \"barbican-api-54d9965d5d-g2r7n\" (UID: \"0eb91316-55e3-466f-bc29-314359383931\") " pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:44 crc kubenswrapper[4768]: I1124 17:08:44.938145 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hwcf\" (UniqueName: \"kubernetes.io/projected/0eb91316-55e3-466f-bc29-314359383931-kube-api-access-6hwcf\") pod \"barbican-api-54d9965d5d-g2r7n\" (UID: \"0eb91316-55e3-466f-bc29-314359383931\") " pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:44 crc kubenswrapper[4768]: I1124 17:08:44.938321 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eb91316-55e3-466f-bc29-314359383931-internal-tls-certs\") pod \"barbican-api-54d9965d5d-g2r7n\" (UID: \"0eb91316-55e3-466f-bc29-314359383931\") " pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.041806 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hwcf\" (UniqueName: \"kubernetes.io/projected/0eb91316-55e3-466f-bc29-314359383931-kube-api-access-6hwcf\") pod \"barbican-api-54d9965d5d-g2r7n\" (UID: \"0eb91316-55e3-466f-bc29-314359383931\") " pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.041901 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eb91316-55e3-466f-bc29-314359383931-internal-tls-certs\") pod \"barbican-api-54d9965d5d-g2r7n\" (UID: \"0eb91316-55e3-466f-bc29-314359383931\") " pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.041938 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eb91316-55e3-466f-bc29-314359383931-combined-ca-bundle\") pod \"barbican-api-54d9965d5d-g2r7n\" (UID: \"0eb91316-55e3-466f-bc29-314359383931\") " pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.041974 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eb91316-55e3-466f-bc29-314359383931-config-data\") pod \"barbican-api-54d9965d5d-g2r7n\" (UID: \"0eb91316-55e3-466f-bc29-314359383931\") " pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.042029 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eb91316-55e3-466f-bc29-314359383931-public-tls-certs\") pod \"barbican-api-54d9965d5d-g2r7n\" (UID: \"0eb91316-55e3-466f-bc29-314359383931\") " pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.042071 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eb91316-55e3-466f-bc29-314359383931-logs\") pod \"barbican-api-54d9965d5d-g2r7n\" (UID: \"0eb91316-55e3-466f-bc29-314359383931\") " pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 
17:08:45.042244 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0eb91316-55e3-466f-bc29-314359383931-config-data-custom\") pod \"barbican-api-54d9965d5d-g2r7n\" (UID: \"0eb91316-55e3-466f-bc29-314359383931\") " pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.046982 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eb91316-55e3-466f-bc29-314359383931-logs\") pod \"barbican-api-54d9965d5d-g2r7n\" (UID: \"0eb91316-55e3-466f-bc29-314359383931\") " pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.051730 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eb91316-55e3-466f-bc29-314359383931-internal-tls-certs\") pod \"barbican-api-54d9965d5d-g2r7n\" (UID: \"0eb91316-55e3-466f-bc29-314359383931\") " pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.052906 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eb91316-55e3-466f-bc29-314359383931-public-tls-certs\") pod \"barbican-api-54d9965d5d-g2r7n\" (UID: \"0eb91316-55e3-466f-bc29-314359383931\") " pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.060679 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eb91316-55e3-466f-bc29-314359383931-combined-ca-bundle\") pod \"barbican-api-54d9965d5d-g2r7n\" (UID: \"0eb91316-55e3-466f-bc29-314359383931\") " pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.061621 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eb91316-55e3-466f-bc29-314359383931-config-data\") pod \"barbican-api-54d9965d5d-g2r7n\" (UID: \"0eb91316-55e3-466f-bc29-314359383931\") " pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.063792 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hwcf\" (UniqueName: \"kubernetes.io/projected/0eb91316-55e3-466f-bc29-314359383931-kube-api-access-6hwcf\") pod \"barbican-api-54d9965d5d-g2r7n\" (UID: \"0eb91316-55e3-466f-bc29-314359383931\") " pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.076202 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0eb91316-55e3-466f-bc29-314359383931-config-data-custom\") pod \"barbican-api-54d9965d5d-g2r7n\" (UID: \"0eb91316-55e3-466f-bc29-314359383931\") " pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.177026 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.180845 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-w8vsq" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.347856 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-db-sync-config-data\") pod \"632b579a-27e1-4431-a7ad-32631cf804b6\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.348192 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-combined-ca-bundle\") pod \"632b579a-27e1-4431-a7ad-32631cf804b6\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.348279 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/632b579a-27e1-4431-a7ad-32631cf804b6-etc-machine-id\") pod \"632b579a-27e1-4431-a7ad-32631cf804b6\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.348310 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92xwn\" (UniqueName: \"kubernetes.io/projected/632b579a-27e1-4431-a7ad-32631cf804b6-kube-api-access-92xwn\") pod \"632b579a-27e1-4431-a7ad-32631cf804b6\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.348382 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-scripts\") pod \"632b579a-27e1-4431-a7ad-32631cf804b6\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.348430 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-config-data\") pod \"632b579a-27e1-4431-a7ad-32631cf804b6\" (UID: \"632b579a-27e1-4431-a7ad-32631cf804b6\") " Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.348509 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/632b579a-27e1-4431-a7ad-32631cf804b6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "632b579a-27e1-4431-a7ad-32631cf804b6" (UID: "632b579a-27e1-4431-a7ad-32631cf804b6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.348738 4768 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/632b579a-27e1-4431-a7ad-32631cf804b6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.353586 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/632b579a-27e1-4431-a7ad-32631cf804b6-kube-api-access-92xwn" (OuterVolumeSpecName: "kube-api-access-92xwn") pod "632b579a-27e1-4431-a7ad-32631cf804b6" (UID: "632b579a-27e1-4431-a7ad-32631cf804b6"). InnerVolumeSpecName "kube-api-access-92xwn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.354086 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "632b579a-27e1-4431-a7ad-32631cf804b6" (UID: "632b579a-27e1-4431-a7ad-32631cf804b6"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.356451 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-scripts" (OuterVolumeSpecName: "scripts") pod "632b579a-27e1-4431-a7ad-32631cf804b6" (UID: "632b579a-27e1-4431-a7ad-32631cf804b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.382494 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "632b579a-27e1-4431-a7ad-32631cf804b6" (UID: "632b579a-27e1-4431-a7ad-32631cf804b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.430312 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-config-data" (OuterVolumeSpecName: "config-data") pod "632b579a-27e1-4431-a7ad-32631cf804b6" (UID: "632b579a-27e1-4431-a7ad-32631cf804b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.450217 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.450247 4768 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.450259 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.450267 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92xwn\" (UniqueName: \"kubernetes.io/projected/632b579a-27e1-4431-a7ad-32631cf804b6-kube-api-access-92xwn\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.450276 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/632b579a-27e1-4431-a7ad-32631cf804b6-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.657208 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-54d9965d5d-g2r7n"] Nov 24 17:08:45 crc kubenswrapper[4768]: W1124 17:08:45.670412 4768 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0eb91316_55e3_466f_bc29_314359383931.slice/crio-85bb864ca61a435b0e3d23bfc213d27c7c4c6a6a6fe37e3f21d5f6a6d3e88422 WatchSource:0}: Error finding container 85bb864ca61a435b0e3d23bfc213d27c7c4c6a6a6fe37e3f21d5f6a6d3e88422: Status 404 returned error can't find the container with id 85bb864ca61a435b0e3d23bfc213d27c7c4c6a6a6fe37e3f21d5f6a6d3e88422 Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.789677 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-54d9965d5d-g2r7n" event={"ID":"0eb91316-55e3-466f-bc29-314359383931","Type":"ContainerStarted","Data":"85bb864ca61a435b0e3d23bfc213d27c7c4c6a6a6fe37e3f21d5f6a6d3e88422"} Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.794094 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-w8vsq" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.796234 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-w8vsq" event={"ID":"632b579a-27e1-4431-a7ad-32631cf804b6","Type":"ContainerDied","Data":"63966f2c506edf91fe0302238b8c69346252127a99f6d0611244fb1b0af5455a"} Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.796262 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63966f2c506edf91fe0302238b8c69346252127a99f6d0611244fb1b0af5455a" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.999265 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 17:08:45 crc kubenswrapper[4768]: E1124 17:08:45.999647 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="632b579a-27e1-4431-a7ad-32631cf804b6" containerName="cinder-db-sync" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.999664 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="632b579a-27e1-4431-a7ad-32631cf804b6" containerName="cinder-db-sync" Nov 24 17:08:45 crc kubenswrapper[4768]: I1124 17:08:45.999936 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="632b579a-27e1-4431-a7ad-32631cf804b6" containerName="cinder-db-sync" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.007481 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.008333 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.012783 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.012932 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-l5fgx" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.013401 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.016990 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.063999 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") " pod="openstack/cinder-scheduler-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.064040 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5366f3a-38c2-40ba-b778-e7487762f88e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") " pod="openstack/cinder-scheduler-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.064062 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-config-data\") pod \"cinder-scheduler-0\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") " pod="openstack/cinder-scheduler-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.064081 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-scripts\") pod \"cinder-scheduler-0\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") " pod="openstack/cinder-scheduler-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.064102 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9pvb\" (UniqueName: \"kubernetes.io/projected/a5366f3a-38c2-40ba-b778-e7487762f88e-kube-api-access-t9pvb\") pod \"cinder-scheduler-0\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") " pod="openstack/cinder-scheduler-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.064197 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") " pod="openstack/cinder-scheduler-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.081329 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-2c94v"] Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.122482 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-fhwr9"] Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.126368 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.166034 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") " pod="openstack/cinder-scheduler-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.166127 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") " pod="openstack/cinder-scheduler-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.166149 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5366f3a-38c2-40ba-b778-e7487762f88e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") " pod="openstack/cinder-scheduler-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.166169 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-config-data\") pod \"cinder-scheduler-0\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") " pod="openstack/cinder-scheduler-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.166186 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-scripts\") pod \"cinder-scheduler-0\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") " pod="openstack/cinder-scheduler-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.166209 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9pvb\" (UniqueName: \"kubernetes.io/projected/a5366f3a-38c2-40ba-b778-e7487762f88e-kube-api-access-t9pvb\") pod \"cinder-scheduler-0\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") " pod="openstack/cinder-scheduler-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.169128 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5366f3a-38c2-40ba-b778-e7487762f88e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") " pod="openstack/cinder-scheduler-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.173113 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") " pod="openstack/cinder-scheduler-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.181868 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-config-data\") pod \"cinder-scheduler-0\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") " pod="openstack/cinder-scheduler-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.182246 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") " pod="openstack/cinder-scheduler-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.183073 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-fhwr9"] Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.187134 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-scripts\") pod \"cinder-scheduler-0\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") " pod="openstack/cinder-scheduler-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.207007 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9pvb\" (UniqueName: \"kubernetes.io/projected/a5366f3a-38c2-40ba-b778-e7487762f88e-kube-api-access-t9pvb\") pod \"cinder-scheduler-0\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") " pod="openstack/cinder-scheduler-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.268519 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-fhwr9\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.268850 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwbb7\" (UniqueName: \"kubernetes.io/projected/da75d66a-010d-483d-b623-70707cc9af95-kube-api-access-cwbb7\") pod \"dnsmasq-dns-5c9776ccc5-fhwr9\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.268894 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-fhwr9\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.268924 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-config\") pod \"dnsmasq-dns-5c9776ccc5-fhwr9\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.268975 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-fhwr9\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.269000 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-fhwr9\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:08:46 crc 
kubenswrapper[4768]: I1124 17:08:46.276847 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.278314 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.280397 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.288624 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.332729 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.372052 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-fhwr9\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.372090 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72591f34-10d5-4bca-bb96-ff008193b726-etc-machine-id\") pod \"cinder-api-0\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.372111 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfbpt\" (UniqueName: \"kubernetes.io/projected/72591f34-10d5-4bca-bb96-ff008193b726-kube-api-access-rfbpt\") pod \"cinder-api-0\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.372139 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-fhwr9\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.372156 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-config-data-custom\") pod \"cinder-api-0\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.372171 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-scripts\") pod \"cinder-api-0\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.372189 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72591f34-10d5-4bca-bb96-ff008193b726-logs\") pod \"cinder-api-0\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.372220 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-fhwr9\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.372253 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-config-data\") pod \"cinder-api-0\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.372285 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwbb7\" (UniqueName: \"kubernetes.io/projected/da75d66a-010d-483d-b623-70707cc9af95-kube-api-access-cwbb7\") pod \"dnsmasq-dns-5c9776ccc5-fhwr9\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.372324 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-fhwr9\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.372338 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.372416 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-config\") pod \"dnsmasq-dns-5c9776ccc5-fhwr9\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.373202 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-config\") pod \"dnsmasq-dns-5c9776ccc5-fhwr9\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.373711 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-fhwr9\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.374222 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-fhwr9\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.374498 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-fhwr9\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.374751 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-fhwr9\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.393512 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwbb7\" (UniqueName: \"kubernetes.io/projected/da75d66a-010d-483d-b623-70707cc9af95-kube-api-access-cwbb7\") pod \"dnsmasq-dns-5c9776ccc5-fhwr9\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.473431 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-config-data\") pod \"cinder-api-0\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.473742 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.473804 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfbpt\" (UniqueName: \"kubernetes.io/projected/72591f34-10d5-4bca-bb96-ff008193b726-kube-api-access-rfbpt\") pod \"cinder-api-0\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.473821 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72591f34-10d5-4bca-bb96-ff008193b726-etc-machine-id\") pod \"cinder-api-0\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.473848 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-config-data-custom\") pod \"cinder-api-0\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.473863 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-scripts\") pod \"cinder-api-0\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.473882 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72591f34-10d5-4bca-bb96-ff008193b726-logs\") pod \"cinder-api-0\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.474224 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72591f34-10d5-4bca-bb96-ff008193b726-etc-machine-id\") pod \"cinder-api-0\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.487464 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.489000 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-scripts\") pod \"cinder-api-0\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.495989 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.498574 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72591f34-10d5-4bca-bb96-ff008193b726-logs\") pod \"cinder-api-0\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.503223 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-config-data\") pod \"cinder-api-0\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.518549 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfbpt\" (UniqueName: \"kubernetes.io/projected/72591f34-10d5-4bca-bb96-ff008193b726-kube-api-access-rfbpt\") pod \"cinder-api-0\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.518958 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-config-data-custom\") pod \"cinder-api-0\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.606121 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.650962 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.779123 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-scripts\") pod \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.779168 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97r92\" (UniqueName: \"kubernetes.io/projected/e1869f53-e1c3-4194-a66f-8d16238e0fe3-kube-api-access-97r92\") pod \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.779188 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-combined-ca-bundle\") pod \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.779240 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1869f53-e1c3-4194-a66f-8d16238e0fe3-run-httpd\") pod \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.779274 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-sg-core-conf-yaml\") pod \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.779297 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1869f53-e1c3-4194-a66f-8d16238e0fe3-log-httpd\") pod \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.779379 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-config-data\") pod \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\" (UID: \"e1869f53-e1c3-4194-a66f-8d16238e0fe3\") " Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.780681 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1869f53-e1c3-4194-a66f-8d16238e0fe3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e1869f53-e1c3-4194-a66f-8d16238e0fe3" (UID: "e1869f53-e1c3-4194-a66f-8d16238e0fe3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.780995 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1869f53-e1c3-4194-a66f-8d16238e0fe3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e1869f53-e1c3-4194-a66f-8d16238e0fe3" (UID: "e1869f53-e1c3-4194-a66f-8d16238e0fe3"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.784471 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-scripts" (OuterVolumeSpecName: "scripts") pod "e1869f53-e1c3-4194-a66f-8d16238e0fe3" (UID: "e1869f53-e1c3-4194-a66f-8d16238e0fe3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.786445 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1869f53-e1c3-4194-a66f-8d16238e0fe3-kube-api-access-97r92" (OuterVolumeSpecName: "kube-api-access-97r92") pod "e1869f53-e1c3-4194-a66f-8d16238e0fe3" (UID: "e1869f53-e1c3-4194-a66f-8d16238e0fe3"). InnerVolumeSpecName "kube-api-access-97r92". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.802975 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-54d9965d5d-g2r7n" event={"ID":"0eb91316-55e3-466f-bc29-314359383931","Type":"ContainerStarted","Data":"90b836bfc68fd72cd45b8bc44016af179358c0c15e045399afd7109068f556a7"} Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.804976 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.805097 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-54d9965d5d-g2r7n" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.805176 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-54d9965d5d-g2r7n" event={"ID":"0eb91316-55e3-466f-bc29-314359383931","Type":"ContainerStarted","Data":"a90fdb41a9b9fd4e1a7afd0a12d1c53a5a5d4d5c69a0bcabe94386d9e828b41d"} Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.807546 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e1869f53-e1c3-4194-a66f-8d16238e0fe3" (UID: "e1869f53-e1c3-4194-a66f-8d16238e0fe3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.809987 4768 generic.go:334] "Generic (PLEG): container finished" podID="e1869f53-e1c3-4194-a66f-8d16238e0fe3" containerID="e3e7ae9be55c2966455d44a4b641633d27c675aeb43d53acfa33c92fdf151708" exitCode=0 Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.810291 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.810329 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1869f53-e1c3-4194-a66f-8d16238e0fe3","Type":"ContainerDied","Data":"e3e7ae9be55c2966455d44a4b641633d27c675aeb43d53acfa33c92fdf151708"} Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.810407 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e1869f53-e1c3-4194-a66f-8d16238e0fe3","Type":"ContainerDied","Data":"cc6266c404d4ff4fab4b55436adb1a0827accbd88dfaf438db2e14f7edcb050e"} Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.810448 4768 scope.go:117] "RemoveContainer" containerID="4d0421fd795547d9afbea9a0c6d6cdd80615efbd3319c43ccd51cbd82a0060d7" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.810836 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-2c94v" podUID="71fb0884-4764-4f3e-bcd7-b2227e9b24d2" containerName="dnsmasq-dns" containerID="cri-o://0ae20eba14eb3a3912e3d77ec46d5180fe17f3c0dba5d2d061244a74a36520b8" gracePeriod=10 Nov 24 17:08:46 crc kubenswrapper[4768]: W1124 17:08:46.832884 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5366f3a_38c2_40ba_b778_e7487762f88e.slice/crio-9899ec8ef7f986bd76113ca114eb570c34ff5e90faa9ef3de83516ef54ebefed WatchSource:0}: Error finding container 9899ec8ef7f986bd76113ca114eb570c34ff5e90faa9ef3de83516ef54ebefed: Status 404 returned error can't find the container with id 9899ec8ef7f986bd76113ca114eb570c34ff5e90faa9ef3de83516ef54ebefed Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.839073 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.846448 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-54d9965d5d-g2r7n" podStartSLOduration=2.846430359 podStartE2EDuration="2.846430359s" podCreationTimestamp="2025-11-24 17:08:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:08:46.828523823 +0000 UTC m=+1008.075492511" watchObservedRunningTime="2025-11-24 17:08:46.846430359 +0000 UTC m=+1008.093399007" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.849710 4768 scope.go:117] "RemoveContainer" containerID="defbfdb622fc5f880b384dd4ea57975ec488999b0f6c9d0b78a556d6626e9c4e" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.882515 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.882537 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97r92\" (UniqueName: \"kubernetes.io/projected/e1869f53-e1c3-4194-a66f-8d16238e0fe3-kube-api-access-97r92\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.882547 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1869f53-e1c3-4194-a66f-8d16238e0fe3-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.882557 4768 reconciler_common.go:293] "Volume detached for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.882565 4768 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e1869f53-e1c3-4194-a66f-8d16238e0fe3-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.897588 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1869f53-e1c3-4194-a66f-8d16238e0fe3" (UID: "e1869f53-e1c3-4194-a66f-8d16238e0fe3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.902796 4768 scope.go:117] "RemoveContainer" containerID="e3e7ae9be55c2966455d44a4b641633d27c675aeb43d53acfa33c92fdf151708" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.928800 4768 scope.go:117] "RemoveContainer" containerID="8ec5009a1aa05e7dcf1ea752dd6c371af21bd69d151bd859e9fe1786fd0938b0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.937281 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-config-data" (OuterVolumeSpecName: "config-data") pod "e1869f53-e1c3-4194-a66f-8d16238e0fe3" (UID: "e1869f53-e1c3-4194-a66f-8d16238e0fe3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.954443 4768 scope.go:117] "RemoveContainer" containerID="4d0421fd795547d9afbea9a0c6d6cdd80615efbd3319c43ccd51cbd82a0060d7" Nov 24 17:08:46 crc kubenswrapper[4768]: E1124 17:08:46.955024 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d0421fd795547d9afbea9a0c6d6cdd80615efbd3319c43ccd51cbd82a0060d7\": container with ID starting with 4d0421fd795547d9afbea9a0c6d6cdd80615efbd3319c43ccd51cbd82a0060d7 not found: ID does not exist" containerID="4d0421fd795547d9afbea9a0c6d6cdd80615efbd3319c43ccd51cbd82a0060d7" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.955080 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d0421fd795547d9afbea9a0c6d6cdd80615efbd3319c43ccd51cbd82a0060d7"} err="failed to get container status \"4d0421fd795547d9afbea9a0c6d6cdd80615efbd3319c43ccd51cbd82a0060d7\": rpc error: code = NotFound desc = could not find container \"4d0421fd795547d9afbea9a0c6d6cdd80615efbd3319c43ccd51cbd82a0060d7\": container with ID starting with 4d0421fd795547d9afbea9a0c6d6cdd80615efbd3319c43ccd51cbd82a0060d7 not found: ID does not exist" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.955110 4768 scope.go:117] "RemoveContainer" containerID="defbfdb622fc5f880b384dd4ea57975ec488999b0f6c9d0b78a556d6626e9c4e" Nov 24 17:08:46 crc kubenswrapper[4768]: E1124 17:08:46.955735 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"defbfdb622fc5f880b384dd4ea57975ec488999b0f6c9d0b78a556d6626e9c4e\": container with ID starting with defbfdb622fc5f880b384dd4ea57975ec488999b0f6c9d0b78a556d6626e9c4e not found: ID does not exist" containerID="defbfdb622fc5f880b384dd4ea57975ec488999b0f6c9d0b78a556d6626e9c4e" Nov 24 17:08:46 crc 
kubenswrapper[4768]: I1124 17:08:46.955777 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"defbfdb622fc5f880b384dd4ea57975ec488999b0f6c9d0b78a556d6626e9c4e"} err="failed to get container status \"defbfdb622fc5f880b384dd4ea57975ec488999b0f6c9d0b78a556d6626e9c4e\": rpc error: code = NotFound desc = could not find container \"defbfdb622fc5f880b384dd4ea57975ec488999b0f6c9d0b78a556d6626e9c4e\": container with ID starting with defbfdb622fc5f880b384dd4ea57975ec488999b0f6c9d0b78a556d6626e9c4e not found: ID does not exist" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.955808 4768 scope.go:117] "RemoveContainer" containerID="e3e7ae9be55c2966455d44a4b641633d27c675aeb43d53acfa33c92fdf151708" Nov 24 17:08:46 crc kubenswrapper[4768]: E1124 17:08:46.956318 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3e7ae9be55c2966455d44a4b641633d27c675aeb43d53acfa33c92fdf151708\": container with ID starting with e3e7ae9be55c2966455d44a4b641633d27c675aeb43d53acfa33c92fdf151708 not found: ID does not exist" containerID="e3e7ae9be55c2966455d44a4b641633d27c675aeb43d53acfa33c92fdf151708" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.956355 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3e7ae9be55c2966455d44a4b641633d27c675aeb43d53acfa33c92fdf151708"} err="failed to get container status \"e3e7ae9be55c2966455d44a4b641633d27c675aeb43d53acfa33c92fdf151708\": rpc error: code = NotFound desc = could not find container \"e3e7ae9be55c2966455d44a4b641633d27c675aeb43d53acfa33c92fdf151708\": container with ID starting with e3e7ae9be55c2966455d44a4b641633d27c675aeb43d53acfa33c92fdf151708 not found: ID does not exist" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.956372 4768 scope.go:117] "RemoveContainer" containerID="8ec5009a1aa05e7dcf1ea752dd6c371af21bd69d151bd859e9fe1786fd0938b0" Nov 24 17:08:46 crc kubenswrapper[4768]: E1124 17:08:46.956556 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ec5009a1aa05e7dcf1ea752dd6c371af21bd69d151bd859e9fe1786fd0938b0\": container with ID starting with 8ec5009a1aa05e7dcf1ea752dd6c371af21bd69d151bd859e9fe1786fd0938b0 not found: ID does not exist" containerID="8ec5009a1aa05e7dcf1ea752dd6c371af21bd69d151bd859e9fe1786fd0938b0" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.956585 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ec5009a1aa05e7dcf1ea752dd6c371af21bd69d151bd859e9fe1786fd0938b0"} err="failed to get container status \"8ec5009a1aa05e7dcf1ea752dd6c371af21bd69d151bd859e9fe1786fd0938b0\": rpc error: code = NotFound desc = could not find container \"8ec5009a1aa05e7dcf1ea752dd6c371af21bd69d151bd859e9fe1786fd0938b0\": container with ID starting with 8ec5009a1aa05e7dcf1ea752dd6c371af21bd69d151bd859e9fe1786fd0938b0 not found: ID does not exist" Nov 24 17:08:46 crc kubenswrapper[4768]: W1124 17:08:46.961815 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda75d66a_010d_483d_b623_70707cc9af95.slice/crio-eb244a2764d6e21aad2ae2fd0b64882429866702b472238dd426ec312b9f8fcf WatchSource:0}: Error finding container eb244a2764d6e21aad2ae2fd0b64882429866702b472238dd426ec312b9f8fcf: Status 404 returned error can't find the container with id 
eb244a2764d6e21aad2ae2fd0b64882429866702b472238dd426ec312b9f8fcf Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.961871 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-fhwr9"] Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.984116 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:46 crc kubenswrapper[4768]: I1124 17:08:46.984143 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1869f53-e1c3-4194-a66f-8d16238e0fe3-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.065664 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.161559 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.170234 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.179830 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:08:47 crc kubenswrapper[4768]: E1124 17:08:47.180172 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1869f53-e1c3-4194-a66f-8d16238e0fe3" containerName="proxy-httpd" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.180188 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1869f53-e1c3-4194-a66f-8d16238e0fe3" containerName="proxy-httpd" Nov 24 17:08:47 crc kubenswrapper[4768]: E1124 17:08:47.180204 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1869f53-e1c3-4194-a66f-8d16238e0fe3" containerName="ceilometer-notification-agent" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.180210 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1869f53-e1c3-4194-a66f-8d16238e0fe3" containerName="ceilometer-notification-agent" Nov 24 17:08:47 crc kubenswrapper[4768]: E1124 17:08:47.180228 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1869f53-e1c3-4194-a66f-8d16238e0fe3" containerName="ceilometer-central-agent" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.180234 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1869f53-e1c3-4194-a66f-8d16238e0fe3" containerName="ceilometer-central-agent" Nov 24 17:08:47 crc kubenswrapper[4768]: E1124 17:08:47.180245 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1869f53-e1c3-4194-a66f-8d16238e0fe3" containerName="sg-core" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.180251 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1869f53-e1c3-4194-a66f-8d16238e0fe3" containerName="sg-core" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.180761 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1869f53-e1c3-4194-a66f-8d16238e0fe3" containerName="sg-core" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.180782 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1869f53-e1c3-4194-a66f-8d16238e0fe3" containerName="ceilometer-central-agent" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.180789 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1869f53-e1c3-4194-a66f-8d16238e0fe3" 
containerName="ceilometer-notification-agent" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.180804 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1869f53-e1c3-4194-a66f-8d16238e0fe3" containerName="proxy-httpd" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.182470 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.188294 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.189198 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.211996 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.229211 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.290921 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shhkx\" (UniqueName: \"kubernetes.io/projected/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-kube-api-access-shhkx\") pod \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.291039 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-dns-svc\") pod \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.291113 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-ovsdbserver-nb\") pod \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.291193 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-config\") pod \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.291210 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-ovsdbserver-sb\") pod \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.291265 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-dns-swift-storage-0\") pod \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\" (UID: \"71fb0884-4764-4f3e-bcd7-b2227e9b24d2\") " Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.291514 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-log-httpd\") pod \"ceilometer-0\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " 
pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.291552 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.291597 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-run-httpd\") pod \"ceilometer-0\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.291622 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-scripts\") pod \"ceilometer-0\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.291654 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-config-data\") pod \"ceilometer-0\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.291693 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cvwc\" (UniqueName: \"kubernetes.io/projected/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-kube-api-access-8cvwc\") pod \"ceilometer-0\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.291709 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.307202 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-kube-api-access-shhkx" (OuterVolumeSpecName: "kube-api-access-shhkx") pod "71fb0884-4764-4f3e-bcd7-b2227e9b24d2" (UID: "71fb0884-4764-4f3e-bcd7-b2227e9b24d2"). InnerVolumeSpecName "kube-api-access-shhkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.363029 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "71fb0884-4764-4f3e-bcd7-b2227e9b24d2" (UID: "71fb0884-4764-4f3e-bcd7-b2227e9b24d2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.369379 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "71fb0884-4764-4f3e-bcd7-b2227e9b24d2" (UID: "71fb0884-4764-4f3e-bcd7-b2227e9b24d2"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.378528 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "71fb0884-4764-4f3e-bcd7-b2227e9b24d2" (UID: "71fb0884-4764-4f3e-bcd7-b2227e9b24d2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.379797 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-config" (OuterVolumeSpecName: "config") pod "71fb0884-4764-4f3e-bcd7-b2227e9b24d2" (UID: "71fb0884-4764-4f3e-bcd7-b2227e9b24d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.389960 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "71fb0884-4764-4f3e-bcd7-b2227e9b24d2" (UID: "71fb0884-4764-4f3e-bcd7-b2227e9b24d2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.394006 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-run-httpd\") pod \"ceilometer-0\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.394118 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-scripts\") pod \"ceilometer-0\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.394162 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-config-data\") pod \"ceilometer-0\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.394202 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cvwc\" (UniqueName: \"kubernetes.io/projected/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-kube-api-access-8cvwc\") pod \"ceilometer-0\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.394219 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.394269 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-log-httpd\") pod \"ceilometer-0\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 
17:08:47.394299 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.394369 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shhkx\" (UniqueName: \"kubernetes.io/projected/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-kube-api-access-shhkx\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.394382 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.394391 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.394401 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.394409 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.394417 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71fb0884-4764-4f3e-bcd7-b2227e9b24d2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.395769 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-run-httpd\") pod \"ceilometer-0\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.396529 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-log-httpd\") pod \"ceilometer-0\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.401205 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.402747 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-config-data\") pod \"ceilometer-0\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.404933 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-scripts\") pod \"ceilometer-0\" (UID: 
\"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.411217 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cvwc\" (UniqueName: \"kubernetes.io/projected/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-kube-api-access-8cvwc\") pod \"ceilometer-0\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.414739 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.514370 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.612392 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1869f53-e1c3-4194-a66f-8d16238e0fe3" path="/var/lib/kubelet/pods/e1869f53-e1c3-4194-a66f-8d16238e0fe3/volumes" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.832924 4768 generic.go:334] "Generic (PLEG): container finished" podID="71fb0884-4764-4f3e-bcd7-b2227e9b24d2" containerID="0ae20eba14eb3a3912e3d77ec46d5180fe17f3c0dba5d2d061244a74a36520b8" exitCode=0 Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.833688 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-2c94v" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.833856 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-2c94v" event={"ID":"71fb0884-4764-4f3e-bcd7-b2227e9b24d2","Type":"ContainerDied","Data":"0ae20eba14eb3a3912e3d77ec46d5180fe17f3c0dba5d2d061244a74a36520b8"} Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.833907 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-2c94v" event={"ID":"71fb0884-4764-4f3e-bcd7-b2227e9b24d2","Type":"ContainerDied","Data":"51b4163429982f5e0832154a90aa6367e4047ab7f9adef8d5fc8b284e153406c"} Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.833923 4768 scope.go:117] "RemoveContainer" containerID="0ae20eba14eb3a3912e3d77ec46d5180fe17f3c0dba5d2d061244a74a36520b8" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.835590 4768 generic.go:334] "Generic (PLEG): container finished" podID="da75d66a-010d-483d-b623-70707cc9af95" containerID="9fbf61905924dd8bbb117d59e6882d8b6624fedca2da0cf2f9f0bda16603451c" exitCode=0 Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.835642 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" event={"ID":"da75d66a-010d-483d-b623-70707cc9af95","Type":"ContainerDied","Data":"9fbf61905924dd8bbb117d59e6882d8b6624fedca2da0cf2f9f0bda16603451c"} Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.835667 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" event={"ID":"da75d66a-010d-483d-b623-70707cc9af95","Type":"ContainerStarted","Data":"eb244a2764d6e21aad2ae2fd0b64882429866702b472238dd426ec312b9f8fcf"} Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.844697 4768 generic.go:334] "Generic (PLEG): container finished" podID="443cde2a-91e0-404e-a067-00558608d888" 
containerID="3b5fbecf94d9fd1f7f9f919cf2a44a8aeac895cfdc36db870dd41ab13635a920" exitCode=0 Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.844766 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-jtdld" event={"ID":"443cde2a-91e0-404e-a067-00558608d888","Type":"ContainerDied","Data":"3b5fbecf94d9fd1f7f9f919cf2a44a8aeac895cfdc36db870dd41ab13635a920"} Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.845766 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a5366f3a-38c2-40ba-b778-e7487762f88e","Type":"ContainerStarted","Data":"9899ec8ef7f986bd76113ca114eb570c34ff5e90faa9ef3de83516ef54ebefed"} Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.847330 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72591f34-10d5-4bca-bb96-ff008193b726","Type":"ContainerStarted","Data":"dfb379a64d9911ab54c9fef2141806ff2c574e8aa779b4c85f7c6a919768bd95"} Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.874092 4768 scope.go:117] "RemoveContainer" containerID="14f20c9d6a0486a5497a1416e2d706122cc3068698c9cce2f5ecd9d091d53b18" Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.910039 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-2c94v"] Nov 24 17:08:47 crc kubenswrapper[4768]: I1124 17:08:47.921477 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-2c94v"] Nov 24 17:08:48 crc kubenswrapper[4768]: I1124 17:08:48.000496 4768 scope.go:117] "RemoveContainer" containerID="0ae20eba14eb3a3912e3d77ec46d5180fe17f3c0dba5d2d061244a74a36520b8" Nov 24 17:08:48 crc kubenswrapper[4768]: E1124 17:08:48.001029 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ae20eba14eb3a3912e3d77ec46d5180fe17f3c0dba5d2d061244a74a36520b8\": container with ID starting with 0ae20eba14eb3a3912e3d77ec46d5180fe17f3c0dba5d2d061244a74a36520b8 not found: ID does not exist" containerID="0ae20eba14eb3a3912e3d77ec46d5180fe17f3c0dba5d2d061244a74a36520b8" Nov 24 17:08:48 crc kubenswrapper[4768]: I1124 17:08:48.001064 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ae20eba14eb3a3912e3d77ec46d5180fe17f3c0dba5d2d061244a74a36520b8"} err="failed to get container status \"0ae20eba14eb3a3912e3d77ec46d5180fe17f3c0dba5d2d061244a74a36520b8\": rpc error: code = NotFound desc = could not find container \"0ae20eba14eb3a3912e3d77ec46d5180fe17f3c0dba5d2d061244a74a36520b8\": container with ID starting with 0ae20eba14eb3a3912e3d77ec46d5180fe17f3c0dba5d2d061244a74a36520b8 not found: ID does not exist" Nov 24 17:08:48 crc kubenswrapper[4768]: I1124 17:08:48.001085 4768 scope.go:117] "RemoveContainer" containerID="14f20c9d6a0486a5497a1416e2d706122cc3068698c9cce2f5ecd9d091d53b18" Nov 24 17:08:48 crc kubenswrapper[4768]: E1124 17:08:48.001335 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14f20c9d6a0486a5497a1416e2d706122cc3068698c9cce2f5ecd9d091d53b18\": container with ID starting with 14f20c9d6a0486a5497a1416e2d706122cc3068698c9cce2f5ecd9d091d53b18 not found: ID does not exist" containerID="14f20c9d6a0486a5497a1416e2d706122cc3068698c9cce2f5ecd9d091d53b18" Nov 24 17:08:48 crc kubenswrapper[4768]: I1124 17:08:48.001370 4768 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"14f20c9d6a0486a5497a1416e2d706122cc3068698c9cce2f5ecd9d091d53b18"} err="failed to get container status \"14f20c9d6a0486a5497a1416e2d706122cc3068698c9cce2f5ecd9d091d53b18\": rpc error: code = NotFound desc = could not find container \"14f20c9d6a0486a5497a1416e2d706122cc3068698c9cce2f5ecd9d091d53b18\": container with ID starting with 14f20c9d6a0486a5497a1416e2d706122cc3068698c9cce2f5ecd9d091d53b18 not found: ID does not exist" Nov 24 17:08:48 crc kubenswrapper[4768]: W1124 17:08:48.028368 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e15cdbb_3aa0_43e4_8b2b_8f1bec9b1b3c.slice/crio-11c3770a298fbd38a42b575eb073a260a11e15235b6b4a94b72fa8d8dc0f2a9b WatchSource:0}: Error finding container 11c3770a298fbd38a42b575eb073a260a11e15235b6b4a94b72fa8d8dc0f2a9b: Status 404 returned error can't find the container with id 11c3770a298fbd38a42b575eb073a260a11e15235b6b4a94b72fa8d8dc0f2a9b Nov 24 17:08:48 crc kubenswrapper[4768]: I1124 17:08:48.029448 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:08:48 crc kubenswrapper[4768]: I1124 17:08:48.770785 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 17:08:48 crc kubenswrapper[4768]: I1124 17:08:48.797390 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-86669456c4-fp95m" Nov 24 17:08:48 crc kubenswrapper[4768]: I1124 17:08:48.873847 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a5366f3a-38c2-40ba-b778-e7487762f88e","Type":"ContainerStarted","Data":"d46b684c626091d9326b1c6b0af2f9a7254f9cd5e6d578cd7443b22048714139"} Nov 24 17:08:48 crc kubenswrapper[4768]: I1124 17:08:48.877577 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c","Type":"ContainerStarted","Data":"11c3770a298fbd38a42b575eb073a260a11e15235b6b4a94b72fa8d8dc0f2a9b"} Nov 24 17:08:48 crc kubenswrapper[4768]: I1124 17:08:48.911646 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72591f34-10d5-4bca-bb96-ff008193b726","Type":"ContainerStarted","Data":"e756a4270e3d30e08bf785a36dbc2ded321d51fb6f70e100310794af7f4809ee"} Nov 24 17:08:48 crc kubenswrapper[4768]: I1124 17:08:48.911687 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72591f34-10d5-4bca-bb96-ff008193b726","Type":"ContainerStarted","Data":"f82338da5341abf83c6f9526bff9627abbc2e950fa39f162b7161106406d9a7f"} Nov 24 17:08:48 crc kubenswrapper[4768]: I1124 17:08:48.912471 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 17:08:48 crc kubenswrapper[4768]: I1124 17:08:48.925283 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" event={"ID":"da75d66a-010d-483d-b623-70707cc9af95","Type":"ContainerStarted","Data":"0703ce620a53752d9ab07623a3d432daf6170a075729f7ed2040c1d914fe4d4c"} Nov 24 17:08:48 crc kubenswrapper[4768]: I1124 17:08:48.925324 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:08:48 crc kubenswrapper[4768]: I1124 17:08:48.929211 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-86669456c4-fp95m" Nov 24 17:08:48 crc 
kubenswrapper[4768]: I1124 17:08:48.937232 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.937215848 podStartE2EDuration="2.937215848s" podCreationTimestamp="2025-11-24 17:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:08:48.934176732 +0000 UTC m=+1010.181145390" watchObservedRunningTime="2025-11-24 17:08:48.937215848 +0000 UTC m=+1010.184184506" Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.033477 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" podStartSLOduration=3.033457358 podStartE2EDuration="3.033457358s" podCreationTimestamp="2025-11-24 17:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:08:49.022079657 +0000 UTC m=+1010.269048315" watchObservedRunningTime="2025-11-24 17:08:49.033457358 +0000 UTC m=+1010.280426016" Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.602832 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71fb0884-4764-4f3e-bcd7-b2227e9b24d2" path="/var/lib/kubelet/pods/71fb0884-4764-4f3e-bcd7-b2227e9b24d2/volumes" Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.625783 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-jtdld" Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.765911 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/443cde2a-91e0-404e-a067-00558608d888-config-data\") pod \"443cde2a-91e0-404e-a067-00558608d888\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.766260 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvsdw\" (UniqueName: \"kubernetes.io/projected/443cde2a-91e0-404e-a067-00558608d888-kube-api-access-tvsdw\") pod \"443cde2a-91e0-404e-a067-00558608d888\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.766294 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/443cde2a-91e0-404e-a067-00558608d888-etc-podinfo\") pod \"443cde2a-91e0-404e-a067-00558608d888\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.767375 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/443cde2a-91e0-404e-a067-00558608d888-scripts\") pod \"443cde2a-91e0-404e-a067-00558608d888\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.767437 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/443cde2a-91e0-404e-a067-00558608d888-combined-ca-bundle\") pod \"443cde2a-91e0-404e-a067-00558608d888\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.767481 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/443cde2a-91e0-404e-a067-00558608d888-config-data-merged\") pod \"443cde2a-91e0-404e-a067-00558608d888\" (UID: \"443cde2a-91e0-404e-a067-00558608d888\") " Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.768535 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/443cde2a-91e0-404e-a067-00558608d888-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "443cde2a-91e0-404e-a067-00558608d888" (UID: "443cde2a-91e0-404e-a067-00558608d888"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.771826 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/443cde2a-91e0-404e-a067-00558608d888-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "443cde2a-91e0-404e-a067-00558608d888" (UID: "443cde2a-91e0-404e-a067-00558608d888"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.785518 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/443cde2a-91e0-404e-a067-00558608d888-scripts" (OuterVolumeSpecName: "scripts") pod "443cde2a-91e0-404e-a067-00558608d888" (UID: "443cde2a-91e0-404e-a067-00558608d888"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.790728 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/443cde2a-91e0-404e-a067-00558608d888-kube-api-access-tvsdw" (OuterVolumeSpecName: "kube-api-access-tvsdw") pod "443cde2a-91e0-404e-a067-00558608d888" (UID: "443cde2a-91e0-404e-a067-00558608d888"). InnerVolumeSpecName "kube-api-access-tvsdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.805543 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/443cde2a-91e0-404e-a067-00558608d888-config-data" (OuterVolumeSpecName: "config-data") pod "443cde2a-91e0-404e-a067-00558608d888" (UID: "443cde2a-91e0-404e-a067-00558608d888"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.837157 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/443cde2a-91e0-404e-a067-00558608d888-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "443cde2a-91e0-404e-a067-00558608d888" (UID: "443cde2a-91e0-404e-a067-00558608d888"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.869429 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/443cde2a-91e0-404e-a067-00558608d888-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.869460 4768 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/443cde2a-91e0-404e-a067-00558608d888-config-data-merged\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.869470 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/443cde2a-91e0-404e-a067-00558608d888-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.869481 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvsdw\" (UniqueName: \"kubernetes.io/projected/443cde2a-91e0-404e-a067-00558608d888-kube-api-access-tvsdw\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.869490 4768 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/443cde2a-91e0-404e-a067-00558608d888-etc-podinfo\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.869499 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/443cde2a-91e0-404e-a067-00558608d888-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.933482 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c","Type":"ContainerStarted","Data":"48461bd57a3203302f690386add4e274e127a3bfb0bd4a182439b97722537e75"} Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.935485 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-jtdld" Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.935536 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-jtdld" event={"ID":"443cde2a-91e0-404e-a067-00558608d888","Type":"ContainerDied","Data":"32c351f714bf154245eaf0fc9e4787762420b70fb9ae664c76939e75237d1d40"} Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.935576 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32c351f714bf154245eaf0fc9e4787762420b70fb9ae664c76939e75237d1d40" Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.936912 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a5366f3a-38c2-40ba-b778-e7487762f88e","Type":"ContainerStarted","Data":"0cc9adff721cd63a4bef3301ce6115c376cd7859e74f4e1592f43e9532d811ca"} Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.936997 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="72591f34-10d5-4bca-bb96-ff008193b726" containerName="cinder-api-log" containerID="cri-o://f82338da5341abf83c6f9526bff9627abbc2e950fa39f162b7161106406d9a7f" gracePeriod=30 Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.937080 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="72591f34-10d5-4bca-bb96-ff008193b726" containerName="cinder-api" containerID="cri-o://e756a4270e3d30e08bf785a36dbc2ded321d51fb6f70e100310794af7f4809ee" gracePeriod=30 Nov 24 17:08:49 crc kubenswrapper[4768]: I1124 17:08:49.967411 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.9352618809999997 podStartE2EDuration="4.967393056s" podCreationTimestamp="2025-11-24 17:08:45 +0000 UTC" firstStartedPulling="2025-11-24 17:08:46.849755413 +0000 UTC m=+1008.096724071" lastFinishedPulling="2025-11-24 17:08:47.881886588 +0000 UTC m=+1009.128855246" observedRunningTime="2025-11-24 17:08:49.964228588 +0000 UTC m=+1011.211197246" watchObservedRunningTime="2025-11-24 17:08:49.967393056 +0000 UTC m=+1011.214361714" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.254948 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-create-mpnzf"] Nov 24 17:08:50 crc kubenswrapper[4768]: E1124 17:08:50.255599 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="443cde2a-91e0-404e-a067-00558608d888" containerName="ironic-db-sync" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.255618 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="443cde2a-91e0-404e-a067-00558608d888" containerName="ironic-db-sync" Nov 24 17:08:50 crc kubenswrapper[4768]: E1124 17:08:50.255632 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="443cde2a-91e0-404e-a067-00558608d888" containerName="init" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.255638 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="443cde2a-91e0-404e-a067-00558608d888" containerName="init" Nov 24 17:08:50 crc kubenswrapper[4768]: E1124 17:08:50.255645 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71fb0884-4764-4f3e-bcd7-b2227e9b24d2" containerName="init" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.255650 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="71fb0884-4764-4f3e-bcd7-b2227e9b24d2" containerName="init" Nov 24 17:08:50 crc kubenswrapper[4768]: 
E1124 17:08:50.272689 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71fb0884-4764-4f3e-bcd7-b2227e9b24d2" containerName="dnsmasq-dns" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.272724 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="71fb0884-4764-4f3e-bcd7-b2227e9b24d2" containerName="dnsmasq-dns" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.273043 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="71fb0884-4764-4f3e-bcd7-b2227e9b24d2" containerName="dnsmasq-dns" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.273065 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="443cde2a-91e0-404e-a067-00558608d888" containerName="ironic-db-sync" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.282160 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-mpnzf" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.291013 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-mpnzf"] Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.384288 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd6be468-91c4-4bd5-8f6c-54396782c17f-operator-scripts\") pod \"ironic-inspector-db-create-mpnzf\" (UID: \"bd6be468-91c4-4bd5-8f6c-54396782c17f\") " pod="openstack/ironic-inspector-db-create-mpnzf" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.384458 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hshh\" (UniqueName: \"kubernetes.io/projected/bd6be468-91c4-4bd5-8f6c-54396782c17f-kube-api-access-6hshh\") pod \"ironic-inspector-db-create-mpnzf\" (UID: \"bd6be468-91c4-4bd5-8f6c-54396782c17f\") " pod="openstack/ironic-inspector-db-create-mpnzf" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.401882 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-27ba-account-create-pcz7v"] Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.403099 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-27ba-account-create-pcz7v" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.405726 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.434499 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-neutron-agent-cb4d89897-bnsh5"] Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.435625 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.437806 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.438080 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-dockercfg-b4cxm" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.458612 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-7fbb6d564d-76t79"] Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.460531 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.462197 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.463850 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.463992 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.464136 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.477052 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-27ba-account-create-pcz7v"] Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.487711 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/26b563bb-da9a-43fe-b201-9f77ed0d0ddd-config\") pod \"ironic-neutron-agent-cb4d89897-bnsh5\" (UID: \"26b563bb-da9a-43fe-b201-9f77ed0d0ddd\") " pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.487775 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b563bb-da9a-43fe-b201-9f77ed0d0ddd-combined-ca-bundle\") pod \"ironic-neutron-agent-cb4d89897-bnsh5\" (UID: \"26b563bb-da9a-43fe-b201-9f77ed0d0ddd\") " pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.487821 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bpmx\" (UniqueName: \"kubernetes.io/projected/f75aecba-ed47-439f-80f3-3e435c38a8c6-kube-api-access-5bpmx\") pod \"ironic-inspector-27ba-account-create-pcz7v\" (UID: \"f75aecba-ed47-439f-80f3-3e435c38a8c6\") " pod="openstack/ironic-inspector-27ba-account-create-pcz7v" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.487866 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clgv2\" (UniqueName: \"kubernetes.io/projected/26b563bb-da9a-43fe-b201-9f77ed0d0ddd-kube-api-access-clgv2\") pod \"ironic-neutron-agent-cb4d89897-bnsh5\" (UID: \"26b563bb-da9a-43fe-b201-9f77ed0d0ddd\") " pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.487915 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hshh\" (UniqueName: \"kubernetes.io/projected/bd6be468-91c4-4bd5-8f6c-54396782c17f-kube-api-access-6hshh\") pod \"ironic-inspector-db-create-mpnzf\" (UID: \"bd6be468-91c4-4bd5-8f6c-54396782c17f\") " pod="openstack/ironic-inspector-db-create-mpnzf" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.487932 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f75aecba-ed47-439f-80f3-3e435c38a8c6-operator-scripts\") pod \"ironic-inspector-27ba-account-create-pcz7v\" (UID: \"f75aecba-ed47-439f-80f3-3e435c38a8c6\") " pod="openstack/ironic-inspector-27ba-account-create-pcz7v" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.487965 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd6be468-91c4-4bd5-8f6c-54396782c17f-operator-scripts\") pod \"ironic-inspector-db-create-mpnzf\" (UID: \"bd6be468-91c4-4bd5-8f6c-54396782c17f\") " pod="openstack/ironic-inspector-db-create-mpnzf" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.488655 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd6be468-91c4-4bd5-8f6c-54396782c17f-operator-scripts\") pod \"ironic-inspector-db-create-mpnzf\" (UID: \"bd6be468-91c4-4bd5-8f6c-54396782c17f\") " pod="openstack/ironic-inspector-db-create-mpnzf" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.488708 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-cb4d89897-bnsh5"] Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.496721 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-7fbb6d564d-76t79"] Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.514924 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hshh\" (UniqueName: \"kubernetes.io/projected/bd6be468-91c4-4bd5-8f6c-54396782c17f-kube-api-access-6hshh\") pod \"ironic-inspector-db-create-mpnzf\" (UID: \"bd6be468-91c4-4bd5-8f6c-54396782c17f\") " pod="openstack/ironic-inspector-db-create-mpnzf" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.589264 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2kmw\" (UniqueName: \"kubernetes.io/projected/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-kube-api-access-j2kmw\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.589299 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-logs\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.589329 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f75aecba-ed47-439f-80f3-3e435c38a8c6-operator-scripts\") pod \"ironic-inspector-27ba-account-create-pcz7v\" (UID: \"f75aecba-ed47-439f-80f3-3e435c38a8c6\") " pod="openstack/ironic-inspector-27ba-account-create-pcz7v" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.589425 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-config-data\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.589449 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-etc-podinfo\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.589466 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/26b563bb-da9a-43fe-b201-9f77ed0d0ddd-config\") pod \"ironic-neutron-agent-cb4d89897-bnsh5\" (UID: \"26b563bb-da9a-43fe-b201-9f77ed0d0ddd\") " pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.589489 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-config-data-custom\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.589522 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b563bb-da9a-43fe-b201-9f77ed0d0ddd-combined-ca-bundle\") pod \"ironic-neutron-agent-cb4d89897-bnsh5\" (UID: \"26b563bb-da9a-43fe-b201-9f77ed0d0ddd\") " pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.589547 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-scripts\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.589566 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-config-data-merged\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.589590 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bpmx\" (UniqueName: \"kubernetes.io/projected/f75aecba-ed47-439f-80f3-3e435c38a8c6-kube-api-access-5bpmx\") pod \"ironic-inspector-27ba-account-create-pcz7v\" (UID: \"f75aecba-ed47-439f-80f3-3e435c38a8c6\") " pod="openstack/ironic-inspector-27ba-account-create-pcz7v" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.589606 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-combined-ca-bundle\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.589635 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clgv2\" (UniqueName: \"kubernetes.io/projected/26b563bb-da9a-43fe-b201-9f77ed0d0ddd-kube-api-access-clgv2\") pod \"ironic-neutron-agent-cb4d89897-bnsh5\" (UID: \"26b563bb-da9a-43fe-b201-9f77ed0d0ddd\") " pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.590728 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f75aecba-ed47-439f-80f3-3e435c38a8c6-operator-scripts\") pod \"ironic-inspector-27ba-account-create-pcz7v\" (UID: \"f75aecba-ed47-439f-80f3-3e435c38a8c6\") " 
pod="openstack/ironic-inspector-27ba-account-create-pcz7v" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.595092 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b563bb-da9a-43fe-b201-9f77ed0d0ddd-combined-ca-bundle\") pod \"ironic-neutron-agent-cb4d89897-bnsh5\" (UID: \"26b563bb-da9a-43fe-b201-9f77ed0d0ddd\") " pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.595235 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/26b563bb-da9a-43fe-b201-9f77ed0d0ddd-config\") pod \"ironic-neutron-agent-cb4d89897-bnsh5\" (UID: \"26b563bb-da9a-43fe-b201-9f77ed0d0ddd\") " pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.622475 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clgv2\" (UniqueName: \"kubernetes.io/projected/26b563bb-da9a-43fe-b201-9f77ed0d0ddd-kube-api-access-clgv2\") pod \"ironic-neutron-agent-cb4d89897-bnsh5\" (UID: \"26b563bb-da9a-43fe-b201-9f77ed0d0ddd\") " pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.631913 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bpmx\" (UniqueName: \"kubernetes.io/projected/f75aecba-ed47-439f-80f3-3e435c38a8c6-kube-api-access-5bpmx\") pod \"ironic-inspector-27ba-account-create-pcz7v\" (UID: \"f75aecba-ed47-439f-80f3-3e435c38a8c6\") " pod="openstack/ironic-inspector-27ba-account-create-pcz7v" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.632294 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-mpnzf" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.693443 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2kmw\" (UniqueName: \"kubernetes.io/projected/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-kube-api-access-j2kmw\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.693487 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-logs\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.693537 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-config-data\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.693558 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-etc-podinfo\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.693584 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-config-data-custom\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.693626 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-scripts\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.693649 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-config-data-merged\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.693696 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-combined-ca-bundle\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.697685 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-config-data-custom\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.697872 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-config-data\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.698697 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-logs\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.700053 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-combined-ca-bundle\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.700274 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-config-data-merged\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.700901 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-etc-podinfo\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.710175 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-scripts\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.712929 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2kmw\" (UniqueName: \"kubernetes.io/projected/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-kube-api-access-j2kmw\") pod \"ironic-7fbb6d564d-76t79\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.728651 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.747019 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-27ba-account-create-pcz7v" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.760328 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.794784 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-scripts\") pod \"72591f34-10d5-4bca-bb96-ff008193b726\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.794887 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-config-data-custom\") pod \"72591f34-10d5-4bca-bb96-ff008193b726\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.794912 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-config-data\") pod \"72591f34-10d5-4bca-bb96-ff008193b726\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.794942 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72591f34-10d5-4bca-bb96-ff008193b726-logs\") pod \"72591f34-10d5-4bca-bb96-ff008193b726\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.794974 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-combined-ca-bundle\") pod \"72591f34-10d5-4bca-bb96-ff008193b726\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.795011 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72591f34-10d5-4bca-bb96-ff008193b726-etc-machine-id\") pod \"72591f34-10d5-4bca-bb96-ff008193b726\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.795135 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfbpt\" (UniqueName: \"kubernetes.io/projected/72591f34-10d5-4bca-bb96-ff008193b726-kube-api-access-rfbpt\") pod \"72591f34-10d5-4bca-bb96-ff008193b726\" (UID: \"72591f34-10d5-4bca-bb96-ff008193b726\") " Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.800100 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72591f34-10d5-4bca-bb96-ff008193b726-kube-api-access-rfbpt" (OuterVolumeSpecName: "kube-api-access-rfbpt") pod "72591f34-10d5-4bca-bb96-ff008193b726" (UID: "72591f34-10d5-4bca-bb96-ff008193b726"). InnerVolumeSpecName "kube-api-access-rfbpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.800675 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72591f34-10d5-4bca-bb96-ff008193b726-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "72591f34-10d5-4bca-bb96-ff008193b726" (UID: "72591f34-10d5-4bca-bb96-ff008193b726"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.800769 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72591f34-10d5-4bca-bb96-ff008193b726-logs" (OuterVolumeSpecName: "logs") pod "72591f34-10d5-4bca-bb96-ff008193b726" (UID: "72591f34-10d5-4bca-bb96-ff008193b726"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.821522 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-scripts" (OuterVolumeSpecName: "scripts") pod "72591f34-10d5-4bca-bb96-ff008193b726" (UID: "72591f34-10d5-4bca-bb96-ff008193b726"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.828928 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "72591f34-10d5-4bca-bb96-ff008193b726" (UID: "72591f34-10d5-4bca-bb96-ff008193b726"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.860508 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.865233 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72591f34-10d5-4bca-bb96-ff008193b726" (UID: "72591f34-10d5-4bca-bb96-ff008193b726"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.895546 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-config-data" (OuterVolumeSpecName: "config-data") pod "72591f34-10d5-4bca-bb96-ff008193b726" (UID: "72591f34-10d5-4bca-bb96-ff008193b726"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.896715 4768 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.896742 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.896752 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72591f34-10d5-4bca-bb96-ff008193b726-logs\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.896760 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.896772 4768 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72591f34-10d5-4bca-bb96-ff008193b726-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.896782 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfbpt\" (UniqueName: \"kubernetes.io/projected/72591f34-10d5-4bca-bb96-ff008193b726-kube-api-access-rfbpt\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.896790 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72591f34-10d5-4bca-bb96-ff008193b726-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.968170 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c","Type":"ContainerStarted","Data":"c82b8491d99697fa36d3ef29ba2cf87e11ffd59af9ab5d07406b3422a8efec2d"} Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.969595 4768 generic.go:334] "Generic (PLEG): container finished" podID="72591f34-10d5-4bca-bb96-ff008193b726" containerID="e756a4270e3d30e08bf785a36dbc2ded321d51fb6f70e100310794af7f4809ee" exitCode=0 Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.969619 4768 generic.go:334] "Generic (PLEG): container finished" podID="72591f34-10d5-4bca-bb96-ff008193b726" containerID="f82338da5341abf83c6f9526bff9627abbc2e950fa39f162b7161106406d9a7f" exitCode=143 Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.970306 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.973291 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72591f34-10d5-4bca-bb96-ff008193b726","Type":"ContainerDied","Data":"e756a4270e3d30e08bf785a36dbc2ded321d51fb6f70e100310794af7f4809ee"} Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.973406 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72591f34-10d5-4bca-bb96-ff008193b726","Type":"ContainerDied","Data":"f82338da5341abf83c6f9526bff9627abbc2e950fa39f162b7161106406d9a7f"} Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.973420 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"72591f34-10d5-4bca-bb96-ff008193b726","Type":"ContainerDied","Data":"dfb379a64d9911ab54c9fef2141806ff2c574e8aa779b4c85f7c6a919768bd95"} Nov 24 17:08:50 crc kubenswrapper[4768]: I1124 17:08:50.973457 4768 scope.go:117] "RemoveContainer" containerID="e756a4270e3d30e08bf785a36dbc2ded321d51fb6f70e100310794af7f4809ee" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.009247 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.016609 4768 scope.go:117] "RemoveContainer" containerID="f82338da5341abf83c6f9526bff9627abbc2e950fa39f162b7161106406d9a7f" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.021422 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.025636 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 17:08:51 crc kubenswrapper[4768]: E1124 17:08:51.026009 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72591f34-10d5-4bca-bb96-ff008193b726" containerName="cinder-api-log" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.026026 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="72591f34-10d5-4bca-bb96-ff008193b726" containerName="cinder-api-log" Nov 24 17:08:51 crc kubenswrapper[4768]: E1124 17:08:51.026035 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72591f34-10d5-4bca-bb96-ff008193b726" containerName="cinder-api" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.026041 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="72591f34-10d5-4bca-bb96-ff008193b726" containerName="cinder-api" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.026197 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="72591f34-10d5-4bca-bb96-ff008193b726" containerName="cinder-api-log" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.026217 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="72591f34-10d5-4bca-bb96-ff008193b726" containerName="cinder-api" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.027154 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.030307 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.030564 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.030673 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.075650 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.101512 4768 scope.go:117] "RemoveContainer" containerID="e756a4270e3d30e08bf785a36dbc2ded321d51fb6f70e100310794af7f4809ee" Nov 24 17:08:51 crc kubenswrapper[4768]: E1124 17:08:51.101928 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e756a4270e3d30e08bf785a36dbc2ded321d51fb6f70e100310794af7f4809ee\": container with ID starting with e756a4270e3d30e08bf785a36dbc2ded321d51fb6f70e100310794af7f4809ee not found: ID does not exist" containerID="e756a4270e3d30e08bf785a36dbc2ded321d51fb6f70e100310794af7f4809ee" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.101955 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e756a4270e3d30e08bf785a36dbc2ded321d51fb6f70e100310794af7f4809ee"} err="failed to get container status \"e756a4270e3d30e08bf785a36dbc2ded321d51fb6f70e100310794af7f4809ee\": rpc error: code = NotFound desc = could not find container \"e756a4270e3d30e08bf785a36dbc2ded321d51fb6f70e100310794af7f4809ee\": container with ID starting with e756a4270e3d30e08bf785a36dbc2ded321d51fb6f70e100310794af7f4809ee not found: ID does not exist" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.101974 4768 scope.go:117] "RemoveContainer" containerID="f82338da5341abf83c6f9526bff9627abbc2e950fa39f162b7161106406d9a7f" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.102934 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.102974 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.103008 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.103896 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-config-data\") pod \"cinder-api-0\" (UID: 
\"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.103923 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-logs\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.104072 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-config-data-custom\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.104144 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9pjn\" (UniqueName: \"kubernetes.io/projected/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-kube-api-access-j9pjn\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: E1124 17:08:51.104307 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f82338da5341abf83c6f9526bff9627abbc2e950fa39f162b7161106406d9a7f\": container with ID starting with f82338da5341abf83c6f9526bff9627abbc2e950fa39f162b7161106406d9a7f not found: ID does not exist" containerID="f82338da5341abf83c6f9526bff9627abbc2e950fa39f162b7161106406d9a7f" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.104447 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f82338da5341abf83c6f9526bff9627abbc2e950fa39f162b7161106406d9a7f"} err="failed to get container status \"f82338da5341abf83c6f9526bff9627abbc2e950fa39f162b7161106406d9a7f\": rpc error: code = NotFound desc = could not find container \"f82338da5341abf83c6f9526bff9627abbc2e950fa39f162b7161106406d9a7f\": container with ID starting with f82338da5341abf83c6f9526bff9627abbc2e950fa39f162b7161106406d9a7f not found: ID does not exist" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.104472 4768 scope.go:117] "RemoveContainer" containerID="e756a4270e3d30e08bf785a36dbc2ded321d51fb6f70e100310794af7f4809ee" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.104426 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.104607 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-scripts\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.112050 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e756a4270e3d30e08bf785a36dbc2ded321d51fb6f70e100310794af7f4809ee"} err="failed to get container status \"e756a4270e3d30e08bf785a36dbc2ded321d51fb6f70e100310794af7f4809ee\": rpc error: code = NotFound desc = could 
not find container \"e756a4270e3d30e08bf785a36dbc2ded321d51fb6f70e100310794af7f4809ee\": container with ID starting with e756a4270e3d30e08bf785a36dbc2ded321d51fb6f70e100310794af7f4809ee not found: ID does not exist" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.112089 4768 scope.go:117] "RemoveContainer" containerID="f82338da5341abf83c6f9526bff9627abbc2e950fa39f162b7161106406d9a7f" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.129381 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f82338da5341abf83c6f9526bff9627abbc2e950fa39f162b7161106406d9a7f"} err="failed to get container status \"f82338da5341abf83c6f9526bff9627abbc2e950fa39f162b7161106406d9a7f\": rpc error: code = NotFound desc = could not find container \"f82338da5341abf83c6f9526bff9627abbc2e950fa39f162b7161106406d9a7f\": container with ID starting with f82338da5341abf83c6f9526bff9627abbc2e950fa39f162b7161106406d9a7f not found: ID does not exist" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.206278 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-config-data-custom\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.206322 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9pjn\" (UniqueName: \"kubernetes.io/projected/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-kube-api-access-j9pjn\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.206406 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.206432 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-scripts\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.206487 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.206508 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.206534 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 
17:08:51.206550 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-config-data\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.206568 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-logs\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.207005 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-logs\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.208105 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.214332 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-mpnzf"] Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.219245 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-config-data-custom\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.219782 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.220209 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.220706 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-scripts\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.221068 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-config-data\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.229234 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " 
pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.244172 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9pjn\" (UniqueName: \"kubernetes.io/projected/8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3-kube-api-access-j9pjn\") pod \"cinder-api-0\" (UID: \"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3\") " pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.314758 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"] Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.317496 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.322549 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.330636 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.334531 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.335707 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.341275 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.519753 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.520051 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd-scripts\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.520113 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.520149 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd-config-data\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.520170 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.520191 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk4sz\" (UniqueName: \"kubernetes.io/projected/fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd-kube-api-access-gk4sz\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.520236 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.520275 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.617237 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72591f34-10d5-4bca-bb96-ff008193b726" path="/var/lib/kubelet/pods/72591f34-10d5-4bca-bb96-ff008193b726/volumes" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.622200 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.622275 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd-config-data\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.622307 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.622328 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk4sz\" (UniqueName: \"kubernetes.io/projected/fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd-kube-api-access-gk4sz\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.622390 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0" Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.622431 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0" Nov 24 
17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.622477 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd-scripts\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0"
Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.622496 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0"
Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.625524 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ironic-conductor-0"
Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.630167 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0"
Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.642310 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd-scripts\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0"
Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.647243 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd-config-data\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0"
Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.650619 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk4sz\" (UniqueName: \"kubernetes.io/projected/fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd-kube-api-access-gk4sz\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0"
Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.650854 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0"
Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.651082 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0"
Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.651304 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0"
Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.791478 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-7fbb6d564d-76t79"]
Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.819227 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-cb4d89897-bnsh5"]
Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.841636 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ironic-conductor-0\" (UID: \"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd\") " pod="openstack/ironic-conductor-0"
Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.847568 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-27ba-account-create-pcz7v"]
Nov 24 17:08:51 crc kubenswrapper[4768]: I1124 17:08:51.908839 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0"
Nov 24 17:08:52 crc kubenswrapper[4768]: I1124 17:08:52.039494 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-27ba-account-create-pcz7v" event={"ID":"f75aecba-ed47-439f-80f3-3e435c38a8c6","Type":"ContainerStarted","Data":"e28f5b70907207cd2685bd875a8112a4282434fae633206c5446786e2564a929"}
Nov 24 17:08:52 crc kubenswrapper[4768]: I1124 17:08:52.040681 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" event={"ID":"26b563bb-da9a-43fe-b201-9f77ed0d0ddd","Type":"ContainerStarted","Data":"149006c0a9c028750962d6d0bba3abd8d034c7ced8efabd927da2e840c153d65"}
Nov 24 17:08:52 crc kubenswrapper[4768]: I1124 17:08:52.041724 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-mpnzf" event={"ID":"bd6be468-91c4-4bd5-8f6c-54396782c17f","Type":"ContainerStarted","Data":"f81dda92d7320acf88a0d118934af6ca1fa5430ab04223544b5a0183de5f4ec7"}
Nov 24 17:08:52 crc kubenswrapper[4768]: I1124 17:08:52.041743 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-mpnzf" event={"ID":"bd6be468-91c4-4bd5-8f6c-54396782c17f","Type":"ContainerStarted","Data":"f5aeca4f0bf6f1763403540311bed56efffd95b63ac001f799141688004aaffa"}
Nov 24 17:08:52 crc kubenswrapper[4768]: I1124 17:08:52.067949 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c","Type":"ContainerStarted","Data":"1cd7de69543d651a2a45bbd623bc728228339fc97dab37f77d476cd575ab7292"}
Nov 24 17:08:52 crc kubenswrapper[4768]: I1124 17:08:52.081200 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7fbb6d564d-76t79" event={"ID":"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74","Type":"ContainerStarted","Data":"eb82a4a38dbdb2812542c7814c8c830b6dfd5ebfa1f414ba33d14008fccaf6cb"}
Nov 24 17:08:52 crc kubenswrapper[4768]: I1124 17:08:52.179869 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-create-mpnzf" podStartSLOduration=2.179848225 podStartE2EDuration="2.179848225s" podCreationTimestamp="2025-11-24 17:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:08:52.062967661 +0000 UTC m=+1013.309936319" watchObservedRunningTime="2025-11-24 17:08:52.179848225 +0000 UTC m=+1013.426816883"
Nov 24 17:08:52 crc kubenswrapper[4768]: I1124 17:08:52.182652 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Nov 24 17:08:52 crc kubenswrapper[4768]: I1124 17:08:52.495391 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"]
Nov 24 17:08:52 crc kubenswrapper[4768]: I1124 17:08:52.644565 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-54d9965d5d-g2r7n"
Nov 24 17:08:52 crc kubenswrapper[4768]: I1124 17:08:52.949944 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-58cbfb7868-t7r6m"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.098420 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd","Type":"ContainerStarted","Data":"e347b511e10205e8ca7b86b70801232ef3446d80ec8f1e74514e513797dae357"}
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.109539 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3","Type":"ContainerStarted","Data":"5c4801002c57bf9e7c099e492dd58e16fced595a73cfc31248b52f82e52cdd24"}
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.113889 4768 generic.go:334] "Generic (PLEG): container finished" podID="f75aecba-ed47-439f-80f3-3e435c38a8c6" containerID="463792b969d5018114a7b1086ada91bc0bf823e4022091dd4fb5552de922a214" exitCode=0
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.113964 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-27ba-account-create-pcz7v" event={"ID":"f75aecba-ed47-439f-80f3-3e435c38a8c6","Type":"ContainerDied","Data":"463792b969d5018114a7b1086ada91bc0bf823e4022091dd4fb5552de922a214"}
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.124816 4768 generic.go:334] "Generic (PLEG): container finished" podID="bd6be468-91c4-4bd5-8f6c-54396782c17f" containerID="f81dda92d7320acf88a0d118934af6ca1fa5430ab04223544b5a0183de5f4ec7" exitCode=0
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.124866 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-mpnzf" event={"ID":"bd6be468-91c4-4bd5-8f6c-54396782c17f","Type":"ContainerDied","Data":"f81dda92d7320acf88a0d118934af6ca1fa5430ab04223544b5a0183de5f4ec7"}
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.407666 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-798b498bb4-66crl"]
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.410597 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.427520 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.434536 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-public-svc"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.442897 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-798b498bb4-66crl"]
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.465703 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/194cfeda-1348-4917-bb28-8cde275f7caa-public-tls-certs\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.465859 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlvtw\" (UniqueName: \"kubernetes.io/projected/194cfeda-1348-4917-bb28-8cde275f7caa-kube-api-access-vlvtw\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.465888 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/194cfeda-1348-4917-bb28-8cde275f7caa-scripts\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.465939 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/194cfeda-1348-4917-bb28-8cde275f7caa-combined-ca-bundle\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.465970 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/194cfeda-1348-4917-bb28-8cde275f7caa-logs\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.466042 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/194cfeda-1348-4917-bb28-8cde275f7caa-config-data-merged\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.466105 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/194cfeda-1348-4917-bb28-8cde275f7caa-etc-podinfo\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.466122 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/194cfeda-1348-4917-bb28-8cde275f7caa-config-data-custom\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.466140 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/194cfeda-1348-4917-bb28-8cde275f7caa-config-data\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.466200 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/194cfeda-1348-4917-bb28-8cde275f7caa-internal-tls-certs\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.567240 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlvtw\" (UniqueName: \"kubernetes.io/projected/194cfeda-1348-4917-bb28-8cde275f7caa-kube-api-access-vlvtw\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.567286 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/194cfeda-1348-4917-bb28-8cde275f7caa-scripts\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.567315 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/194cfeda-1348-4917-bb28-8cde275f7caa-combined-ca-bundle\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.567356 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/194cfeda-1348-4917-bb28-8cde275f7caa-logs\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.567408 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/194cfeda-1348-4917-bb28-8cde275f7caa-config-data-merged\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.567452 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/194cfeda-1348-4917-bb28-8cde275f7caa-etc-podinfo\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.567467 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/194cfeda-1348-4917-bb28-8cde275f7caa-config-data-custom\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.567487 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/194cfeda-1348-4917-bb28-8cde275f7caa-config-data\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.567521 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/194cfeda-1348-4917-bb28-8cde275f7caa-internal-tls-certs\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.567554 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/194cfeda-1348-4917-bb28-8cde275f7caa-public-tls-certs\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.568731 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/194cfeda-1348-4917-bb28-8cde275f7caa-logs\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.569117 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/194cfeda-1348-4917-bb28-8cde275f7caa-config-data-merged\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.572007 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/194cfeda-1348-4917-bb28-8cde275f7caa-internal-tls-certs\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.573815 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/194cfeda-1348-4917-bb28-8cde275f7caa-etc-podinfo\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.573820 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/194cfeda-1348-4917-bb28-8cde275f7caa-combined-ca-bundle\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.574060 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/194cfeda-1348-4917-bb28-8cde275f7caa-config-data\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.579597 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/194cfeda-1348-4917-bb28-8cde275f7caa-scripts\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.582050 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/194cfeda-1348-4917-bb28-8cde275f7caa-config-data-custom\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.583133 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlvtw\" (UniqueName: \"kubernetes.io/projected/194cfeda-1348-4917-bb28-8cde275f7caa-kube-api-access-vlvtw\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.595897 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/194cfeda-1348-4917-bb28-8cde275f7caa-public-tls-certs\") pod \"ironic-798b498bb4-66crl\" (UID: \"194cfeda-1348-4917-bb28-8cde275f7caa\") " pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:53 crc kubenswrapper[4768]: I1124 17:08:53.804043 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:54 crc kubenswrapper[4768]: I1124 17:08:54.093892 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-54d9965d5d-g2r7n"
Nov 24 17:08:54 crc kubenswrapper[4768]: I1124 17:08:54.153432 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-86669456c4-fp95m"]
Nov 24 17:08:54 crc kubenswrapper[4768]: I1124 17:08:54.153710 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-86669456c4-fp95m" podUID="89f2a026-39af-4e20-bdf8-82ab0ace0d4e" containerName="barbican-api-log" containerID="cri-o://135af4d4183f6baa4cfb68f1b673b7d7a4aae454e9c7aa5993cfc0bcd2e8563c" gracePeriod=30
Nov 24 17:08:54 crc kubenswrapper[4768]: I1124 17:08:54.153957 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-86669456c4-fp95m" podUID="89f2a026-39af-4e20-bdf8-82ab0ace0d4e" containerName="barbican-api" containerID="cri-o://569a15f401c350856f5bf6789ec911c19577741448c1d279564244822de0223d" gracePeriod=30
Nov 24 17:08:54 crc kubenswrapper[4768]: I1124 17:08:54.171387 4768 generic.go:334] "Generic (PLEG): container finished" podID="fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd" containerID="73ba2a20a7e432889b06299fa535af9a6b2664c07fa29d99f65aaee3c6598723" exitCode=0
Nov 24 17:08:54 crc kubenswrapper[4768]: I1124 17:08:54.171507 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd","Type":"ContainerDied","Data":"73ba2a20a7e432889b06299fa535af9a6b2664c07fa29d99f65aaee3c6598723"}
Nov 24 17:08:54 crc kubenswrapper[4768]: I1124 17:08:54.194007 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3","Type":"ContainerStarted","Data":"5fa21b08fac3eaf5fa80e9aa9afe8e802d333a375aa8317773ceba70e91f7a32"}
Nov 24 17:08:54 crc kubenswrapper[4768]: I1124 17:08:54.194052 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3","Type":"ContainerStarted","Data":"beaa9a577d0bc72bc4c98639405eedcacc8956304cb535595a64c254bc7f9b2a"}
Nov 24 17:08:54 crc kubenswrapper[4768]: I1124 17:08:54.194065 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Nov 24 17:08:54 crc kubenswrapper[4768]: I1124 17:08:54.246425 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.246407338 podStartE2EDuration="3.246407338s" podCreationTimestamp="2025-11-24 17:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:08:54.238785613 +0000 UTC m=+1015.485754271" watchObservedRunningTime="2025-11-24 17:08:54.246407338 +0000 UTC m=+1015.493375996"
Nov 24 17:08:54 crc kubenswrapper[4768]: E1124 17:08:54.797297 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b5ab834_a98f_4ace_a22f_cde15ebf7f4b.slice/crio-b2dd97c3ec05ee6fde7de0529d5becf3d88de483be8aaecf84ccad322a66c99c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b5ab834_a98f_4ace_a22f_cde15ebf7f4b.slice\": RecentStats: unable to find data in memory cache]"
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.065238 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-27ba-account-create-pcz7v"
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.159599 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bpmx\" (UniqueName: \"kubernetes.io/projected/f75aecba-ed47-439f-80f3-3e435c38a8c6-kube-api-access-5bpmx\") pod \"f75aecba-ed47-439f-80f3-3e435c38a8c6\" (UID: \"f75aecba-ed47-439f-80f3-3e435c38a8c6\") "
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.159655 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f75aecba-ed47-439f-80f3-3e435c38a8c6-operator-scripts\") pod \"f75aecba-ed47-439f-80f3-3e435c38a8c6\" (UID: \"f75aecba-ed47-439f-80f3-3e435c38a8c6\") "
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.161524 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f75aecba-ed47-439f-80f3-3e435c38a8c6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f75aecba-ed47-439f-80f3-3e435c38a8c6" (UID: "f75aecba-ed47-439f-80f3-3e435c38a8c6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.169511 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f75aecba-ed47-439f-80f3-3e435c38a8c6-kube-api-access-5bpmx" (OuterVolumeSpecName: "kube-api-access-5bpmx") pod "f75aecba-ed47-439f-80f3-3e435c38a8c6" (UID: "f75aecba-ed47-439f-80f3-3e435c38a8c6"). InnerVolumeSpecName "kube-api-access-5bpmx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.185190 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-c9b47fdf7-ztl8b"
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.231153 4768 generic.go:334] "Generic (PLEG): container finished" podID="89f2a026-39af-4e20-bdf8-82ab0ace0d4e" containerID="135af4d4183f6baa4cfb68f1b673b7d7a4aae454e9c7aa5993cfc0bcd2e8563c" exitCode=143
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.231218 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86669456c4-fp95m" event={"ID":"89f2a026-39af-4e20-bdf8-82ab0ace0d4e","Type":"ContainerDied","Data":"135af4d4183f6baa4cfb68f1b673b7d7a4aae454e9c7aa5993cfc0bcd2e8563c"}
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.243997 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-27ba-account-create-pcz7v" event={"ID":"f75aecba-ed47-439f-80f3-3e435c38a8c6","Type":"ContainerDied","Data":"e28f5b70907207cd2685bd875a8112a4282434fae633206c5446786e2564a929"}
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.244045 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e28f5b70907207cd2685bd875a8112a4282434fae633206c5446786e2564a929"
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.244103 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-27ba-account-create-pcz7v"
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.265791 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bpmx\" (UniqueName: \"kubernetes.io/projected/f75aecba-ed47-439f-80f3-3e435c38a8c6-kube-api-access-5bpmx\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.265830 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f75aecba-ed47-439f-80f3-3e435c38a8c6-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.278585 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-58cbfb7868-t7r6m"]
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.278829 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-58cbfb7868-t7r6m" podUID="9038e5e4-2985-4de6-b6d5-e16d170d38d8" containerName="neutron-api" containerID="cri-o://bbb3d5dc503f42663a282cf67f51a144f5843fcddc9e3ddeca03bd95bdc723a7" gracePeriod=30
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.278977 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-58cbfb7868-t7r6m" podUID="9038e5e4-2985-4de6-b6d5-e16d170d38d8" containerName="neutron-httpd" containerID="cri-o://bb8c77a23e3a6dd0ed445a7f567c1437066a8775c25686a167c462424342e5ad" gracePeriod=30
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.300541 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-mpnzf"
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.301291 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-mpnzf" event={"ID":"bd6be468-91c4-4bd5-8f6c-54396782c17f","Type":"ContainerDied","Data":"f5aeca4f0bf6f1763403540311bed56efffd95b63ac001f799141688004aaffa"}
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.301317 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5aeca4f0bf6f1763403540311bed56efffd95b63ac001f799141688004aaffa"
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.471252 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd6be468-91c4-4bd5-8f6c-54396782c17f-operator-scripts\") pod \"bd6be468-91c4-4bd5-8f6c-54396782c17f\" (UID: \"bd6be468-91c4-4bd5-8f6c-54396782c17f\") "
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.471786 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hshh\" (UniqueName: \"kubernetes.io/projected/bd6be468-91c4-4bd5-8f6c-54396782c17f-kube-api-access-6hshh\") pod \"bd6be468-91c4-4bd5-8f6c-54396782c17f\" (UID: \"bd6be468-91c4-4bd5-8f6c-54396782c17f\") "
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.474255 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd6be468-91c4-4bd5-8f6c-54396782c17f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bd6be468-91c4-4bd5-8f6c-54396782c17f" (UID: "bd6be468-91c4-4bd5-8f6c-54396782c17f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.483832 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd6be468-91c4-4bd5-8f6c-54396782c17f-kube-api-access-6hshh" (OuterVolumeSpecName: "kube-api-access-6hshh") pod "bd6be468-91c4-4bd5-8f6c-54396782c17f" (UID: "bd6be468-91c4-4bd5-8f6c-54396782c17f"). InnerVolumeSpecName "kube-api-access-6hshh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.576277 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hshh\" (UniqueName: \"kubernetes.io/projected/bd6be468-91c4-4bd5-8f6c-54396782c17f-kube-api-access-6hshh\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.576325 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd6be468-91c4-4bd5-8f6c-54396782c17f-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:55 crc kubenswrapper[4768]: I1124 17:08:55.814942 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-798b498bb4-66crl"]
Nov 24 17:08:56 crc kubenswrapper[4768]: I1124 17:08:56.313539 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c","Type":"ContainerStarted","Data":"3d874cb5feed233ae5cf3ba66bb471ca897ed04accc768645ef5610be2b4c2e1"}
Nov 24 17:08:56 crc kubenswrapper[4768]: I1124 17:08:56.313982 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 24 17:08:56 crc kubenswrapper[4768]: I1124 17:08:56.320807 4768 generic.go:334] "Generic (PLEG): container finished" podID="e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" containerID="4ecca148c2d11b37a7385cb899769f1734736f4a513982d8a2a490123e1b165a" exitCode=0
Nov 24 17:08:56 crc kubenswrapper[4768]: I1124 17:08:56.320853 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7fbb6d564d-76t79" event={"ID":"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74","Type":"ContainerDied","Data":"4ecca148c2d11b37a7385cb899769f1734736f4a513982d8a2a490123e1b165a"}
Nov 24 17:08:56 crc kubenswrapper[4768]: I1124 17:08:56.335721 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-798b498bb4-66crl" event={"ID":"194cfeda-1348-4917-bb28-8cde275f7caa","Type":"ContainerStarted","Data":"9a4ab2c26cfa1c50c37babcc64bea5d5ac8d2edc21cdd91730d203badd81159c"}
Nov 24 17:08:56 crc kubenswrapper[4768]: I1124 17:08:56.335792 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-798b498bb4-66crl" event={"ID":"194cfeda-1348-4917-bb28-8cde275f7caa","Type":"ContainerStarted","Data":"2d7c2d9731c889a59fe501872ee859c69c4d91de98d4b0a0be2ef8779de74f53"}
Nov 24 17:08:56 crc kubenswrapper[4768]: I1124 17:08:56.343862 4768 generic.go:334] "Generic (PLEG): container finished" podID="9038e5e4-2985-4de6-b6d5-e16d170d38d8" containerID="bb8c77a23e3a6dd0ed445a7f567c1437066a8775c25686a167c462424342e5ad" exitCode=0
Nov 24 17:08:56 crc kubenswrapper[4768]: I1124 17:08:56.344018 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58cbfb7868-t7r6m" event={"ID":"9038e5e4-2985-4de6-b6d5-e16d170d38d8","Type":"ContainerDied","Data":"bb8c77a23e3a6dd0ed445a7f567c1437066a8775c25686a167c462424342e5ad"}
Nov 24 17:08:56 crc kubenswrapper[4768]: I1124 17:08:56.349088 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-mpnzf"
Nov 24 17:08:56 crc kubenswrapper[4768]: I1124 17:08:56.352081 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" event={"ID":"26b563bb-da9a-43fe-b201-9f77ed0d0ddd","Type":"ContainerStarted","Data":"e4e7503b35ab35d227f5ca612c048a38814b040dcdbc4bebdc12278788e4d610"}
Nov 24 17:08:56 crc kubenswrapper[4768]: I1124 17:08:56.352801 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5"
Nov 24 17:08:56 crc kubenswrapper[4768]: I1124 17:08:56.366098 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.049683129 podStartE2EDuration="9.366074714s" podCreationTimestamp="2025-11-24 17:08:47 +0000 UTC" firstStartedPulling="2025-11-24 17:08:48.040013677 +0000 UTC m=+1009.286982335" lastFinishedPulling="2025-11-24 17:08:53.356405262 +0000 UTC m=+1014.603373920" observedRunningTime="2025-11-24 17:08:56.335069678 +0000 UTC m=+1017.582038336" watchObservedRunningTime="2025-11-24 17:08:56.366074714 +0000 UTC m=+1017.613043372"
Nov 24 17:08:56 crc kubenswrapper[4768]: I1124 17:08:56.464698 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" podStartSLOduration=3.090337272 podStartE2EDuration="6.464676771s" podCreationTimestamp="2025-11-24 17:08:50 +0000 UTC" firstStartedPulling="2025-11-24 17:08:51.795744998 +0000 UTC m=+1013.042713656" lastFinishedPulling="2025-11-24 17:08:55.170084497 +0000 UTC m=+1016.417053155" observedRunningTime="2025-11-24 17:08:56.455496102 +0000 UTC m=+1017.702464760" watchObservedRunningTime="2025-11-24 17:08:56.464676771 +0000 UTC m=+1017.711645429"
Nov 24 17:08:56 crc kubenswrapper[4768]: I1124 17:08:56.489715 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9"
Nov 24 17:08:56 crc kubenswrapper[4768]: I1124 17:08:56.556779 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-79ljq"]
Nov 24 17:08:56 crc kubenswrapper[4768]: I1124 17:08:56.557178 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-79ljq" podUID="fc2ddf52-e603-44e4-a5ef-aa85afdc7c26" containerName="dnsmasq-dns" containerID="cri-o://00c38ac23f922d3c5fe21bf2403d93bc3258218f419681d8268fae50f31cd7bc" gracePeriod=10
Nov 24 17:08:56 crc kubenswrapper[4768]: I1124 17:08:56.809926 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Nov 24 17:08:56 crc kubenswrapper[4768]: I1124 17:08:56.874861 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.127764 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-79ljq"
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.237537 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-ovsdbserver-sb\") pod \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") "
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.237670 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-dns-swift-storage-0\") pod \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") "
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.237755 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln8f5\" (UniqueName: \"kubernetes.io/projected/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-kube-api-access-ln8f5\") pod \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") "
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.237995 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-dns-svc\") pod \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") "
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.238065 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-ovsdbserver-nb\") pod \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") "
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.238164 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-config\") pod \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\" (UID: \"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26\") "
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.264059 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-kube-api-access-ln8f5" (OuterVolumeSpecName: "kube-api-access-ln8f5") pod "fc2ddf52-e603-44e4-a5ef-aa85afdc7c26" (UID: "fc2ddf52-e603-44e4-a5ef-aa85afdc7c26"). InnerVolumeSpecName "kube-api-access-ln8f5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.305222 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-config" (OuterVolumeSpecName: "config") pod "fc2ddf52-e603-44e4-a5ef-aa85afdc7c26" (UID: "fc2ddf52-e603-44e4-a5ef-aa85afdc7c26"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.320955 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fc2ddf52-e603-44e4-a5ef-aa85afdc7c26" (UID: "fc2ddf52-e603-44e4-a5ef-aa85afdc7c26"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.334662 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fc2ddf52-e603-44e4-a5ef-aa85afdc7c26" (UID: "fc2ddf52-e603-44e4-a5ef-aa85afdc7c26"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.337400 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fc2ddf52-e603-44e4-a5ef-aa85afdc7c26" (UID: "fc2ddf52-e603-44e4-a5ef-aa85afdc7c26"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.337885 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fc2ddf52-e603-44e4-a5ef-aa85afdc7c26" (UID: "fc2ddf52-e603-44e4-a5ef-aa85afdc7c26"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.340751 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln8f5\" (UniqueName: \"kubernetes.io/projected/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-kube-api-access-ln8f5\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.340783 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.340794 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.340803 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-config\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.340811 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.340822 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.375098 4768 generic.go:334] "Generic (PLEG): container finished" podID="194cfeda-1348-4917-bb28-8cde275f7caa" containerID="9a4ab2c26cfa1c50c37babcc64bea5d5ac8d2edc21cdd91730d203badd81159c" exitCode=0
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.375213 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-798b498bb4-66crl" event={"ID":"194cfeda-1348-4917-bb28-8cde275f7caa","Type":"ContainerDied","Data":"9a4ab2c26cfa1c50c37babcc64bea5d5ac8d2edc21cdd91730d203badd81159c"}
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.383034 4768 generic.go:334] "Generic (PLEG): container finished" podID="e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" containerID="a24dc458d39cee6eb47bba5bd5b61d1654fd37f0d2217d2958a53c7b3207b4a0" exitCode=1
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.383250 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7fbb6d564d-76t79" event={"ID":"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74","Type":"ContainerDied","Data":"a24dc458d39cee6eb47bba5bd5b61d1654fd37f0d2217d2958a53c7b3207b4a0"}
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.383296 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7fbb6d564d-76t79" event={"ID":"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74","Type":"ContainerStarted","Data":"0f2a40034662ac5a1c46ee72d62e06db4d556b4bf8a178f3af2b83d5a163eea6"}
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.384373 4768 scope.go:117] "RemoveContainer" containerID="a24dc458d39cee6eb47bba5bd5b61d1654fd37f0d2217d2958a53c7b3207b4a0"
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.388536 4768 generic.go:334] "Generic (PLEG): container finished" podID="fc2ddf52-e603-44e4-a5ef-aa85afdc7c26" containerID="00c38ac23f922d3c5fe21bf2403d93bc3258218f419681d8268fae50f31cd7bc" exitCode=0
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.389123 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-79ljq"
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.390991 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-79ljq" event={"ID":"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26","Type":"ContainerDied","Data":"00c38ac23f922d3c5fe21bf2403d93bc3258218f419681d8268fae50f31cd7bc"}
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.391037 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-79ljq" event={"ID":"fc2ddf52-e603-44e4-a5ef-aa85afdc7c26","Type":"ContainerDied","Data":"374c53fd5808380014ea96c88a695b3f44a05b46e3c69df43614f24b37b9e50d"}
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.391062 4768 scope.go:117] "RemoveContainer" containerID="00c38ac23f922d3c5fe21bf2403d93bc3258218f419681d8268fae50f31cd7bc"
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.391300 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a5366f3a-38c2-40ba-b778-e7487762f88e" containerName="probe" containerID="cri-o://0cc9adff721cd63a4bef3301ce6115c376cd7859e74f4e1592f43e9532d811ca" gracePeriod=30
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.391312 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a5366f3a-38c2-40ba-b778-e7487762f88e" containerName="cinder-scheduler" containerID="cri-o://d46b684c626091d9326b1c6b0af2f9a7254f9cd5e6d578cd7443b22048714139" gracePeriod=30
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.420971 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-86669456c4-fp95m" podUID="89f2a026-39af-4e20-bdf8-82ab0ace0d4e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:49686->10.217.0.157:9311: read: connection reset by peer"
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.421057 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-86669456c4-fp95m" podUID="89f2a026-39af-4e20-bdf8-82ab0ace0d4e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:49682->10.217.0.157:9311: read: connection reset by peer"
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.453406 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-79ljq"]
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.453754 4768 scope.go:117] "RemoveContainer" containerID="f2477913873c41eb08ebd465ead580fccabbd35c71072abf31917a3dd882322b"
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.459606 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-79ljq"]
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.520056 4768 scope.go:117] "RemoveContainer" containerID="00c38ac23f922d3c5fe21bf2403d93bc3258218f419681d8268fae50f31cd7bc"
Nov 24 17:08:57 crc kubenswrapper[4768]: E1124 17:08:57.520455 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00c38ac23f922d3c5fe21bf2403d93bc3258218f419681d8268fae50f31cd7bc\": container with ID starting with 00c38ac23f922d3c5fe21bf2403d93bc3258218f419681d8268fae50f31cd7bc not found: ID does not exist" containerID="00c38ac23f922d3c5fe21bf2403d93bc3258218f419681d8268fae50f31cd7bc"
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.520506 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00c38ac23f922d3c5fe21bf2403d93bc3258218f419681d8268fae50f31cd7bc"} err="failed to get container status \"00c38ac23f922d3c5fe21bf2403d93bc3258218f419681d8268fae50f31cd7bc\": rpc error: code = NotFound desc = could not find container \"00c38ac23f922d3c5fe21bf2403d93bc3258218f419681d8268fae50f31cd7bc\": container with ID starting with 00c38ac23f922d3c5fe21bf2403d93bc3258218f419681d8268fae50f31cd7bc not found: ID does not exist"
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.520532 4768 scope.go:117] "RemoveContainer" containerID="f2477913873c41eb08ebd465ead580fccabbd35c71072abf31917a3dd882322b"
Nov 24 17:08:57 crc kubenswrapper[4768]: E1124 17:08:57.520849 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2477913873c41eb08ebd465ead580fccabbd35c71072abf31917a3dd882322b\": container with ID starting with f2477913873c41eb08ebd465ead580fccabbd35c71072abf31917a3dd882322b not found: ID does not exist" containerID="f2477913873c41eb08ebd465ead580fccabbd35c71072abf31917a3dd882322b"
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.520902 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2477913873c41eb08ebd465ead580fccabbd35c71072abf31917a3dd882322b"} err="failed to get container status \"f2477913873c41eb08ebd465ead580fccabbd35c71072abf31917a3dd882322b\": rpc error: code = NotFound desc = could not find container \"f2477913873c41eb08ebd465ead580fccabbd35c71072abf31917a3dd882322b\": container with ID starting with f2477913873c41eb08ebd465ead580fccabbd35c71072abf31917a3dd882322b not found: ID does not exist"
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.594484 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc2ddf52-e603-44e4-a5ef-aa85afdc7c26" path="/var/lib/kubelet/pods/fc2ddf52-e603-44e4-a5ef-aa85afdc7c26/volumes"
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.833821 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-86669456c4-fp95m"
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.959548 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-combined-ca-bundle\") pod \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\" (UID: \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\") "
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.959595 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d868t\" (UniqueName: \"kubernetes.io/projected/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-kube-api-access-d868t\") pod \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\" (UID: \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\") "
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.959649 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-config-data\") pod \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\" (UID: \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\") "
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.959667 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-config-data-custom\") pod \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\" (UID: \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\") "
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.959780 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-logs\") pod \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\" (UID: \"89f2a026-39af-4e20-bdf8-82ab0ace0d4e\") "
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.960684 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-logs" (OuterVolumeSpecName: "logs") pod "89f2a026-39af-4e20-bdf8-82ab0ace0d4e" (UID: "89f2a026-39af-4e20-bdf8-82ab0ace0d4e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.964691 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "89f2a026-39af-4e20-bdf8-82ab0ace0d4e" (UID: "89f2a026-39af-4e20-bdf8-82ab0ace0d4e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.965011 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-kube-api-access-d868t" (OuterVolumeSpecName: "kube-api-access-d868t") pod "89f2a026-39af-4e20-bdf8-82ab0ace0d4e" (UID: "89f2a026-39af-4e20-bdf8-82ab0ace0d4e"). InnerVolumeSpecName "kube-api-access-d868t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 17:08:57 crc kubenswrapper[4768]: I1124 17:08:57.986070 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89f2a026-39af-4e20-bdf8-82ab0ace0d4e" (UID: "89f2a026-39af-4e20-bdf8-82ab0ace0d4e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:08:58 crc kubenswrapper[4768]: I1124 17:08:58.014299 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-config-data" (OuterVolumeSpecName: "config-data") pod "89f2a026-39af-4e20-bdf8-82ab0ace0d4e" (UID: "89f2a026-39af-4e20-bdf8-82ab0ace0d4e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:08:58 crc kubenswrapper[4768]: I1124 17:08:58.061630 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:58 crc kubenswrapper[4768]: I1124 17:08:58.061999 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d868t\" (UniqueName: \"kubernetes.io/projected/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-kube-api-access-d868t\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:58 crc kubenswrapper[4768]: I1124 17:08:58.062012 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:58 crc kubenswrapper[4768]: I1124 17:08:58.062022 4768 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:58 crc kubenswrapper[4768]: I1124 17:08:58.062031 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89f2a026-39af-4e20-bdf8-82ab0ace0d4e-logs\") on node \"crc\" DevicePath \"\""
Nov 24 17:08:58 crc kubenswrapper[4768]: I1124 17:08:58.404807 4768 generic.go:334] "Generic (PLEG): container finished" podID="e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" containerID="575e23707546feec2150ac41ef50849f98531660227b5457c2440201a60da8c5" exitCode=1
Nov 24 17:08:58 crc kubenswrapper[4768]: I1124 17:08:58.404876 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7fbb6d564d-76t79" event={"ID":"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74","Type":"ContainerDied","Data":"575e23707546feec2150ac41ef50849f98531660227b5457c2440201a60da8c5"}
Nov 24 17:08:58 crc kubenswrapper[4768]: I1124 17:08:58.404906 4768 scope.go:117] "RemoveContainer" containerID="a24dc458d39cee6eb47bba5bd5b61d1654fd37f0d2217d2958a53c7b3207b4a0"
Nov 24 17:08:58 crc kubenswrapper[4768]: I1124 17:08:58.407843 4768 scope.go:117] "RemoveContainer" containerID="575e23707546feec2150ac41ef50849f98531660227b5457c2440201a60da8c5"
Nov 24 17:08:58 crc kubenswrapper[4768]: E1124 17:08:58.408198 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-7fbb6d564d-76t79_openstack(e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74)\"" pod="openstack/ironic-7fbb6d564d-76t79" podUID="e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74"
Nov 24 17:08:58 crc kubenswrapper[4768]: I1124 17:08:58.414430 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-798b498bb4-66crl" event={"ID":"194cfeda-1348-4917-bb28-8cde275f7caa","Type":"ContainerStarted","Data":"cc16b010515f267663e719578ae3097683ec018a60747f6568a801cd5c94cb19"}
Nov 24 17:08:58 crc kubenswrapper[4768]: I1124 17:08:58.414488 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-798b498bb4-66crl" event={"ID":"194cfeda-1348-4917-bb28-8cde275f7caa","Type":"ContainerStarted","Data":"f8bd7d769ba27d10ea7b2210d53436daddfa2e3ecb50cde161a8dbcb7395f91c"}
Nov 24 17:08:58 crc kubenswrapper[4768]: I1124 17:08:58.414601 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:08:58 crc kubenswrapper[4768]: I1124 17:08:58.417693 4768 generic.go:334] "Generic (PLEG): container finished" podID="89f2a026-39af-4e20-bdf8-82ab0ace0d4e" containerID="569a15f401c350856f5bf6789ec911c19577741448c1d279564244822de0223d" exitCode=0
Nov 24 17:08:58 crc kubenswrapper[4768]: I1124 17:08:58.417736 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-86669456c4-fp95m"
Nov 24 17:08:58 crc kubenswrapper[4768]: I1124 17:08:58.417770 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86669456c4-fp95m" event={"ID":"89f2a026-39af-4e20-bdf8-82ab0ace0d4e","Type":"ContainerDied","Data":"569a15f401c350856f5bf6789ec911c19577741448c1d279564244822de0223d"}
Nov 24 17:08:58 crc kubenswrapper[4768]: I1124 17:08:58.417810 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86669456c4-fp95m" event={"ID":"89f2a026-39af-4e20-bdf8-82ab0ace0d4e","Type":"ContainerDied","Data":"df72553b10729b293fc79189ea00e00441d3d10f6abb484150add5a7fc777dac"}
Nov 24 17:08:58 crc kubenswrapper[4768]: I1124 17:08:58.420036 4768 generic.go:334] "Generic (PLEG): container finished" podID="a5366f3a-38c2-40ba-b778-e7487762f88e" containerID="0cc9adff721cd63a4bef3301ce6115c376cd7859e74f4e1592f43e9532d811ca" exitCode=0
Nov 24 17:08:58 crc kubenswrapper[4768]: I1124 17:08:58.420078 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a5366f3a-38c2-40ba-b778-e7487762f88e","Type":"ContainerDied","Data":"0cc9adff721cd63a4bef3301ce6115c376cd7859e74f4e1592f43e9532d811ca"}
Nov 24 17:08:58 crc kubenswrapper[4768]: I1124 17:08:58.461905 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-798b498bb4-66crl" podStartSLOduration=5.461887784 podStartE2EDuration="5.461887784s" podCreationTimestamp="2025-11-24 17:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:08:58.459329792 +0000 UTC m=+1019.706298450" watchObservedRunningTime="2025-11-24 17:08:58.461887784 +0000 UTC m=+1019.708856442"
Nov 24 17:08:58 crc kubenswrapper[4768]: I1124 17:08:58.487248 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-86669456c4-fp95m"]
Nov 24 17:08:58 crc kubenswrapper[4768]: I1124 17:08:58.495295 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-86669456c4-fp95m"]
Nov 24 17:08:59 crc kubenswrapper[4768]: I1124 17:08:59.267219 4768 scope.go:117] "RemoveContainer" containerID="569a15f401c350856f5bf6789ec911c19577741448c1d279564244822de0223d"
Nov 24 17:08:59 crc kubenswrapper[4768]: I1124 17:08:59.288701 4768 scope.go:117] "RemoveContainer" containerID="135af4d4183f6baa4cfb68f1b673b7d7a4aae454e9c7aa5993cfc0bcd2e8563c"
Nov 24 17:08:59 crc kubenswrapper[4768]: I1124 17:08:59.318514 4768 scope.go:117] "RemoveContainer" containerID="569a15f401c350856f5bf6789ec911c19577741448c1d279564244822de0223d"
Nov 24 17:08:59 crc kubenswrapper[4768]: E1124 17:08:59.319655 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"569a15f401c350856f5bf6789ec911c19577741448c1d279564244822de0223d\": container with ID starting with 569a15f401c350856f5bf6789ec911c19577741448c1d279564244822de0223d not found: ID does not exist" containerID="569a15f401c350856f5bf6789ec911c19577741448c1d279564244822de0223d"
Nov 24 17:08:59 crc kubenswrapper[4768]: I1124 17:08:59.319703 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"569a15f401c350856f5bf6789ec911c19577741448c1d279564244822de0223d"} err="failed to get container status \"569a15f401c350856f5bf6789ec911c19577741448c1d279564244822de0223d\": rpc error: code = NotFound desc = could not find container \"569a15f401c350856f5bf6789ec911c19577741448c1d279564244822de0223d\": container with ID starting with 569a15f401c350856f5bf6789ec911c19577741448c1d279564244822de0223d not found: ID does not exist"
Nov 24 17:08:59 crc kubenswrapper[4768]: I1124 17:08:59.319726 4768 scope.go:117] "RemoveContainer" containerID="135af4d4183f6baa4cfb68f1b673b7d7a4aae454e9c7aa5993cfc0bcd2e8563c"
Nov 24 17:08:59 crc kubenswrapper[4768]: E1124 17:08:59.320230 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"135af4d4183f6baa4cfb68f1b673b7d7a4aae454e9c7aa5993cfc0bcd2e8563c\": container with ID starting with 135af4d4183f6baa4cfb68f1b673b7d7a4aae454e9c7aa5993cfc0bcd2e8563c not found: ID does not exist" containerID="135af4d4183f6baa4cfb68f1b673b7d7a4aae454e9c7aa5993cfc0bcd2e8563c"
Nov 24 17:08:59 crc kubenswrapper[4768]: I1124 17:08:59.320254 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"135af4d4183f6baa4cfb68f1b673b7d7a4aae454e9c7aa5993cfc0bcd2e8563c"} err="failed to get container status \"135af4d4183f6baa4cfb68f1b673b7d7a4aae454e9c7aa5993cfc0bcd2e8563c\": rpc error: code = NotFound desc = could not find container \"135af4d4183f6baa4cfb68f1b673b7d7a4aae454e9c7aa5993cfc0bcd2e8563c\": container with ID starting with 135af4d4183f6baa4cfb68f1b673b7d7a4aae454e9c7aa5993cfc0bcd2e8563c not found: ID does not exist"
Nov 24 17:08:59 crc kubenswrapper[4768]: I1124 17:08:59.435865 4768 scope.go:117] "RemoveContainer" containerID="575e23707546feec2150ac41ef50849f98531660227b5457c2440201a60da8c5"
Nov 24 17:08:59 crc kubenswrapper[4768]: E1124 17:08:59.436308 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-7fbb6d564d-76t79_openstack(e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74)\"" pod="openstack/ironic-7fbb6d564d-76t79" podUID="e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74"
Nov 24 17:08:59 crc kubenswrapper[4768]: I1124 17:08:59.438180 4768 generic.go:334] "Generic (PLEG): container finished" podID="26b563bb-da9a-43fe-b201-9f77ed0d0ddd" containerID="e4e7503b35ab35d227f5ca612c048a38814b040dcdbc4bebdc12278788e4d610" exitCode=1
Nov 24 17:08:59 crc kubenswrapper[4768]: I1124 17:08:59.438248 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" event={"ID":"26b563bb-da9a-43fe-b201-9f77ed0d0ddd","Type":"ContainerDied","Data":"e4e7503b35ab35d227f5ca612c048a38814b040dcdbc4bebdc12278788e4d610"}
Nov 24 17:08:59 crc kubenswrapper[4768]: I1124 17:08:59.439186 4768 scope.go:117] "RemoveContainer" containerID="e4e7503b35ab35d227f5ca612c048a38814b040dcdbc4bebdc12278788e4d610"
Nov 24 17:08:59 crc kubenswrapper[4768]: I1124 17:08:59.604250 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89f2a026-39af-4e20-bdf8-82ab0ace0d4e" path="/var/lib/kubelet/pods/89f2a026-39af-4e20-bdf8-82ab0ace0d4e/volumes"
Nov 24 17:09:00 crc kubenswrapper[4768]: I1124 17:09:00.454165 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" event={"ID":"26b563bb-da9a-43fe-b201-9f77ed0d0ddd","Type":"ContainerStarted","Data":"b0d471d18f44db3856de6f9f8ca0d8ce54e3bb9cfd8718a90f63b7e19aa51aaf"}
Nov 24 17:09:00 crc kubenswrapper[4768]: I1124 17:09:00.455203 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5"
Nov 24 17:09:00 crc kubenswrapper[4768]: I1124 17:09:00.861287 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-7fbb6d564d-76t79"
Nov 24 17:09:00 crc kubenswrapper[4768]: I1124 17:09:00.861486 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-7fbb6d564d-76t79"
Nov 24 17:09:00 crc kubenswrapper[4768]: I1124 17:09:00.862495 4768 scope.go:117] "RemoveContainer" containerID="575e23707546feec2150ac41ef50849f98531660227b5457c2440201a60da8c5"
Nov 24 17:09:00 crc kubenswrapper[4768]: E1124 17:09:00.862956 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-7fbb6d564d-76t79_openstack(e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74)\"" pod="openstack/ironic-7fbb6d564d-76t79" podUID="e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.077219 4768 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.230938 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-config-data-custom\") pod \"a5366f3a-38c2-40ba-b778-e7487762f88e\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") "
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.230994 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5366f3a-38c2-40ba-b778-e7487762f88e-etc-machine-id\") pod \"a5366f3a-38c2-40ba-b778-e7487762f88e\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") "
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.231041 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-config-data\") pod \"a5366f3a-38c2-40ba-b778-e7487762f88e\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") "
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.231073 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-scripts\") pod \"a5366f3a-38c2-40ba-b778-e7487762f88e\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") "
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.231098 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-combined-ca-bundle\") pod \"a5366f3a-38c2-40ba-b778-e7487762f88e\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") "
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.231183 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9pvb\" (UniqueName: \"kubernetes.io/projected/a5366f3a-38c2-40ba-b778-e7487762f88e-kube-api-access-t9pvb\") pod \"a5366f3a-38c2-40ba-b778-e7487762f88e\" (UID: \"a5366f3a-38c2-40ba-b778-e7487762f88e\") "
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.232748 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5366f3a-38c2-40ba-b778-e7487762f88e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a5366f3a-38c2-40ba-b778-e7487762f88e" (UID: "a5366f3a-38c2-40ba-b778-e7487762f88e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.238268 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-scripts" (OuterVolumeSpecName: "scripts") pod "a5366f3a-38c2-40ba-b778-e7487762f88e" (UID: "a5366f3a-38c2-40ba-b778-e7487762f88e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.238773 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a5366f3a-38c2-40ba-b778-e7487762f88e" (UID: "a5366f3a-38c2-40ba-b778-e7487762f88e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.259081 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5366f3a-38c2-40ba-b778-e7487762f88e-kube-api-access-t9pvb" (OuterVolumeSpecName: "kube-api-access-t9pvb") pod "a5366f3a-38c2-40ba-b778-e7487762f88e" (UID: "a5366f3a-38c2-40ba-b778-e7487762f88e"). InnerVolumeSpecName "kube-api-access-t9pvb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.311254 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5366f3a-38c2-40ba-b778-e7487762f88e" (UID: "a5366f3a-38c2-40ba-b778-e7487762f88e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.333534 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.333566 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.333576 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9pvb\" (UniqueName: \"kubernetes.io/projected/a5366f3a-38c2-40ba-b778-e7487762f88e-kube-api-access-t9pvb\") on node \"crc\" DevicePath \"\""
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.333585 4768 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.333593 4768 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5366f3a-38c2-40ba-b778-e7487762f88e-etc-machine-id\") on node \"crc\" DevicePath \"\""
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.395076 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-config-data" (OuterVolumeSpecName: "config-data") pod "a5366f3a-38c2-40ba-b778-e7487762f88e" (UID: "a5366f3a-38c2-40ba-b778-e7487762f88e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.435792 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5366f3a-38c2-40ba-b778-e7487762f88e-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.467551 4768 generic.go:334] "Generic (PLEG): container finished" podID="a5366f3a-38c2-40ba-b778-e7487762f88e" containerID="d46b684c626091d9326b1c6b0af2f9a7254f9cd5e6d578cd7443b22048714139" exitCode=0
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.468127 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.468588 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a5366f3a-38c2-40ba-b778-e7487762f88e","Type":"ContainerDied","Data":"d46b684c626091d9326b1c6b0af2f9a7254f9cd5e6d578cd7443b22048714139"}
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.468634 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a5366f3a-38c2-40ba-b778-e7487762f88e","Type":"ContainerDied","Data":"9899ec8ef7f986bd76113ca114eb570c34ff5e90faa9ef3de83516ef54ebefed"}
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.468651 4768 scope.go:117] "RemoveContainer" containerID="0cc9adff721cd63a4bef3301ce6115c376cd7859e74f4e1592f43e9532d811ca"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.497808 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.507440 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.516735 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 24 17:09:01 crc kubenswrapper[4768]: E1124 17:09:01.517378 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5366f3a-38c2-40ba-b778-e7487762f88e" containerName="probe"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.517394 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5366f3a-38c2-40ba-b778-e7487762f88e" containerName="probe"
Nov 24 17:09:01 crc kubenswrapper[4768]: E1124 17:09:01.517403 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd6be468-91c4-4bd5-8f6c-54396782c17f" containerName="mariadb-database-create"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.517409 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd6be468-91c4-4bd5-8f6c-54396782c17f" containerName="mariadb-database-create"
Nov 24 17:09:01 crc kubenswrapper[4768]: E1124 17:09:01.517423 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc2ddf52-e603-44e4-a5ef-aa85afdc7c26" containerName="dnsmasq-dns"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.517430 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc2ddf52-e603-44e4-a5ef-aa85afdc7c26" containerName="dnsmasq-dns"
Nov 24 17:09:01 crc kubenswrapper[4768]: E1124 17:09:01.517441 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f2a026-39af-4e20-bdf8-82ab0ace0d4e" containerName="barbican-api-log"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.517447 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f2a026-39af-4e20-bdf8-82ab0ace0d4e" containerName="barbican-api-log"
Nov 24 17:09:01 crc kubenswrapper[4768]: E1124 17:09:01.517465 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc2ddf52-e603-44e4-a5ef-aa85afdc7c26" containerName="init"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.517472 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc2ddf52-e603-44e4-a5ef-aa85afdc7c26" containerName="init"
Nov 24 17:09:01 crc kubenswrapper[4768]: E1124 17:09:01.517493 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5366f3a-38c2-40ba-b778-e7487762f88e" containerName="cinder-scheduler"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.517500 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5366f3a-38c2-40ba-b778-e7487762f88e" containerName="cinder-scheduler"
Nov 24 17:09:01 crc kubenswrapper[4768]: E1124 17:09:01.517508 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f2a026-39af-4e20-bdf8-82ab0ace0d4e" containerName="barbican-api"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.517513 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f2a026-39af-4e20-bdf8-82ab0ace0d4e" containerName="barbican-api"
Nov 24 17:09:01 crc kubenswrapper[4768]: E1124 17:09:01.517523 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f75aecba-ed47-439f-80f3-3e435c38a8c6" containerName="mariadb-account-create"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.517530 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f75aecba-ed47-439f-80f3-3e435c38a8c6" containerName="mariadb-account-create"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.517701 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc2ddf52-e603-44e4-a5ef-aa85afdc7c26" containerName="dnsmasq-dns"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.517713 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5366f3a-38c2-40ba-b778-e7487762f88e" containerName="cinder-scheduler"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.517725 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="89f2a026-39af-4e20-bdf8-82ab0ace0d4e" containerName="barbican-api"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.517738 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f75aecba-ed47-439f-80f3-3e435c38a8c6" containerName="mariadb-account-create"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.517749 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="89f2a026-39af-4e20-bdf8-82ab0ace0d4e" containerName="barbican-api-log"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.517759 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd6be468-91c4-4bd5-8f6c-54396782c17f" containerName="mariadb-database-create"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.517766 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5366f3a-38c2-40ba-b778-e7487762f88e" containerName="probe"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.518690 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.520347 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.523630 4768 scope.go:117] "RemoveContainer" containerID="d46b684c626091d9326b1c6b0af2f9a7254f9cd5e6d578cd7443b22048714139"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.528883 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.538021 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3b85390f-acde-4350-8c18-1f588ffa8ab5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3b85390f-acde-4350-8c18-1f588ffa8ab5\") " pod="openstack/cinder-scheduler-0"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.538057 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b85390f-acde-4350-8c18-1f588ffa8ab5-config-data\") pod \"cinder-scheduler-0\" (UID: \"3b85390f-acde-4350-8c18-1f588ffa8ab5\") " pod="openstack/cinder-scheduler-0"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.538084 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fr5c\" (UniqueName: \"kubernetes.io/projected/3b85390f-acde-4350-8c18-1f588ffa8ab5-kube-api-access-4fr5c\") pod \"cinder-scheduler-0\" (UID: \"3b85390f-acde-4350-8c18-1f588ffa8ab5\") " pod="openstack/cinder-scheduler-0"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.538112 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b85390f-acde-4350-8c18-1f588ffa8ab5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3b85390f-acde-4350-8c18-1f588ffa8ab5\") " pod="openstack/cinder-scheduler-0"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.538264 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b85390f-acde-4350-8c18-1f588ffa8ab5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3b85390f-acde-4350-8c18-1f588ffa8ab5\") " pod="openstack/cinder-scheduler-0"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.538342 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b85390f-acde-4350-8c18-1f588ffa8ab5-scripts\") pod \"cinder-scheduler-0\" (UID: \"3b85390f-acde-4350-8c18-1f588ffa8ab5\") " pod="openstack/cinder-scheduler-0"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.559034 4768 scope.go:117] "RemoveContainer" containerID="0cc9adff721cd63a4bef3301ce6115c376cd7859e74f4e1592f43e9532d811ca"
Nov 24 17:09:01 crc kubenswrapper[4768]: E1124 17:09:01.560454 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cc9adff721cd63a4bef3301ce6115c376cd7859e74f4e1592f43e9532d811ca\": container with ID starting with 0cc9adff721cd63a4bef3301ce6115c376cd7859e74f4e1592f43e9532d811ca not found: ID does not exist" containerID="0cc9adff721cd63a4bef3301ce6115c376cd7859e74f4e1592f43e9532d811ca"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.560499 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cc9adff721cd63a4bef3301ce6115c376cd7859e74f4e1592f43e9532d811ca"} err="failed to get container status \"0cc9adff721cd63a4bef3301ce6115c376cd7859e74f4e1592f43e9532d811ca\": rpc error: code = NotFound desc = could not find container \"0cc9adff721cd63a4bef3301ce6115c376cd7859e74f4e1592f43e9532d811ca\": container with ID starting with 0cc9adff721cd63a4bef3301ce6115c376cd7859e74f4e1592f43e9532d811ca not found: ID does not exist"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.560525 4768 scope.go:117] "RemoveContainer" containerID="d46b684c626091d9326b1c6b0af2f9a7254f9cd5e6d578cd7443b22048714139"
Nov 24 17:09:01 crc kubenswrapper[4768]: E1124 17:09:01.560968 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d46b684c626091d9326b1c6b0af2f9a7254f9cd5e6d578cd7443b22048714139\": container with ID starting with d46b684c626091d9326b1c6b0af2f9a7254f9cd5e6d578cd7443b22048714139 not found: ID does not exist" containerID="d46b684c626091d9326b1c6b0af2f9a7254f9cd5e6d578cd7443b22048714139"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.560998 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d46b684c626091d9326b1c6b0af2f9a7254f9cd5e6d578cd7443b22048714139"} err="failed to get container status \"d46b684c626091d9326b1c6b0af2f9a7254f9cd5e6d578cd7443b22048714139\": rpc error: code = NotFound desc = could not find container \"d46b684c626091d9326b1c6b0af2f9a7254f9cd5e6d578cd7443b22048714139\": container with ID starting with d46b684c626091d9326b1c6b0af2f9a7254f9cd5e6d578cd7443b22048714139 not found: ID does not exist"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.592916 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5366f3a-38c2-40ba-b778-e7487762f88e" path="/var/lib/kubelet/pods/a5366f3a-38c2-40ba-b778-e7487762f88e/volumes"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.639861 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3b85390f-acde-4350-8c18-1f588ffa8ab5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3b85390f-acde-4350-8c18-1f588ffa8ab5\") " pod="openstack/cinder-scheduler-0"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.639930 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b85390f-acde-4350-8c18-1f588ffa8ab5-config-data\") pod \"cinder-scheduler-0\" (UID: \"3b85390f-acde-4350-8c18-1f588ffa8ab5\") " pod="openstack/cinder-scheduler-0"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.639962 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fr5c\" (UniqueName: \"kubernetes.io/projected/3b85390f-acde-4350-8c18-1f588ffa8ab5-kube-api-access-4fr5c\") pod \"cinder-scheduler-0\" (UID: \"3b85390f-acde-4350-8c18-1f588ffa8ab5\") " pod="openstack/cinder-scheduler-0"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.639995 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b85390f-acde-4350-8c18-1f588ffa8ab5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3b85390f-acde-4350-8c18-1f588ffa8ab5\") " pod="openstack/cinder-scheduler-0"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.640085 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b85390f-acde-4350-8c18-1f588ffa8ab5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3b85390f-acde-4350-8c18-1f588ffa8ab5\") " pod="openstack/cinder-scheduler-0"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.640151 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b85390f-acde-4350-8c18-1f588ffa8ab5-scripts\") pod \"cinder-scheduler-0\" (UID: \"3b85390f-acde-4350-8c18-1f588ffa8ab5\") " pod="openstack/cinder-scheduler-0"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.641775 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3b85390f-acde-4350-8c18-1f588ffa8ab5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3b85390f-acde-4350-8c18-1f588ffa8ab5\") " pod="openstack/cinder-scheduler-0"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.644390 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b85390f-acde-4350-8c18-1f588ffa8ab5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3b85390f-acde-4350-8c18-1f588ffa8ab5\") " pod="openstack/cinder-scheduler-0"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.644577 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b85390f-acde-4350-8c18-1f588ffa8ab5-scripts\") pod \"cinder-scheduler-0\" (UID: \"3b85390f-acde-4350-8c18-1f588ffa8ab5\") " pod="openstack/cinder-scheduler-0"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.644901 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b85390f-acde-4350-8c18-1f588ffa8ab5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3b85390f-acde-4350-8c18-1f588ffa8ab5\") " pod="openstack/cinder-scheduler-0"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.648992 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b85390f-acde-4350-8c18-1f588ffa8ab5-config-data\") pod \"cinder-scheduler-0\" (UID: \"3b85390f-acde-4350-8c18-1f588ffa8ab5\") " pod="openstack/cinder-scheduler-0"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.662243 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fr5c\" (UniqueName: \"kubernetes.io/projected/3b85390f-acde-4350-8c18-1f588ffa8ab5-kube-api-access-4fr5c\") pod \"cinder-scheduler-0\" (UID: \"3b85390f-acde-4350-8c18-1f588ffa8ab5\") " pod="openstack/cinder-scheduler-0"
Nov 24 17:09:01 crc kubenswrapper[4768]: I1124 17:09:01.838163 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 24 17:09:02 crc kubenswrapper[4768]: I1124 17:09:02.325768 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 24 17:09:02 crc kubenswrapper[4768]: W1124 17:09:02.332659 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b85390f_acde_4350_8c18_1f588ffa8ab5.slice/crio-9c3315bb05242bb533f5514ee4a492dc12c93ed849c4dfc96b50b42a2b694a37 WatchSource:0}: Error finding container 9c3315bb05242bb533f5514ee4a492dc12c93ed849c4dfc96b50b42a2b694a37: Status 404 returned error can't find the container with id 9c3315bb05242bb533f5514ee4a492dc12c93ed849c4dfc96b50b42a2b694a37
Nov 24 17:09:02 crc kubenswrapper[4768]: I1124 17:09:02.477402 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3b85390f-acde-4350-8c18-1f588ffa8ab5","Type":"ContainerStarted","Data":"9c3315bb05242bb533f5514ee4a492dc12c93ed849c4dfc96b50b42a2b694a37"}
Nov 24 17:09:02 crc kubenswrapper[4768]: I1124 17:09:02.479037 4768 generic.go:334] "Generic (PLEG): container finished" podID="26b563bb-da9a-43fe-b201-9f77ed0d0ddd" containerID="b0d471d18f44db3856de6f9f8ca0d8ce54e3bb9cfd8718a90f63b7e19aa51aaf" exitCode=1
Nov 24 17:09:02 crc kubenswrapper[4768]: I1124 17:09:02.479213 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" event={"ID":"26b563bb-da9a-43fe-b201-9f77ed0d0ddd","Type":"ContainerDied","Data":"b0d471d18f44db3856de6f9f8ca0d8ce54e3bb9cfd8718a90f63b7e19aa51aaf"}
Nov 24 17:09:02 crc kubenswrapper[4768]: I1124 17:09:02.479446 4768 scope.go:117] "RemoveContainer" containerID="e4e7503b35ab35d227f5ca612c048a38814b040dcdbc4bebdc12278788e4d610"
Nov 24 17:09:02 crc kubenswrapper[4768]: I1124 17:09:02.479963 4768 scope.go:117] "RemoveContainer" containerID="b0d471d18f44db3856de6f9f8ca0d8ce54e3bb9cfd8718a90f63b7e19aa51aaf"
Nov 24 17:09:02 crc kubenswrapper[4768]: E1124 17:09:02.480248 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-cb4d89897-bnsh5_openstack(26b563bb-da9a-43fe-b201-9f77ed0d0ddd)\"" pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" podUID="26b563bb-da9a-43fe-b201-9f77ed0d0ddd"
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.145719 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.425787 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58cbfb7868-t7r6m"
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.498622 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3b85390f-acde-4350-8c18-1f588ffa8ab5","Type":"ContainerStarted","Data":"d14d63b64d7a8d14aac91a0bd1a2c9da622a7609e8215e0f067033da30d61329"}
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.519028 4768 generic.go:334] "Generic (PLEG): container finished" podID="9038e5e4-2985-4de6-b6d5-e16d170d38d8" containerID="bbb3d5dc503f42663a282cf67f51a144f5843fcddc9e3ddeca03bd95bdc723a7" exitCode=0
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.519092 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58cbfb7868-t7r6m" event={"ID":"9038e5e4-2985-4de6-b6d5-e16d170d38d8","Type":"ContainerDied","Data":"bbb3d5dc503f42663a282cf67f51a144f5843fcddc9e3ddeca03bd95bdc723a7"}
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.519119 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58cbfb7868-t7r6m" event={"ID":"9038e5e4-2985-4de6-b6d5-e16d170d38d8","Type":"ContainerDied","Data":"a97baba026709b249dcc0efe341b245bb72a02d046cceefd582f62e2776194ff"}
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.519133 4768 scope.go:117] "RemoveContainer" containerID="bb8c77a23e3a6dd0ed445a7f567c1437066a8775c25686a167c462424342e5ad"
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.519246 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58cbfb7868-t7r6m"
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.559518 4768 scope.go:117] "RemoveContainer" containerID="bbb3d5dc503f42663a282cf67f51a144f5843fcddc9e3ddeca03bd95bdc723a7"
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.590602 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tpjw\" (UniqueName: \"kubernetes.io/projected/9038e5e4-2985-4de6-b6d5-e16d170d38d8-kube-api-access-2tpjw\") pod \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\" (UID: \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\") "
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.590688 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-combined-ca-bundle\") pod \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\" (UID: \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\") "
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.590716 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-httpd-config\") pod \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\" (UID: \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\") "
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.590791 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-config\") pod \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\" (UID: \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\") "
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.590811 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-ovndb-tls-certs\") pod \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\" (UID: \"9038e5e4-2985-4de6-b6d5-e16d170d38d8\") "
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.597500 4768 scope.go:117] "RemoveContainer" containerID="bb8c77a23e3a6dd0ed445a7f567c1437066a8775c25686a167c462424342e5ad"
Nov 24 17:09:03 crc kubenswrapper[4768]: E1124 17:09:03.598918 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb8c77a23e3a6dd0ed445a7f567c1437066a8775c25686a167c462424342e5ad\": container with ID starting with bb8c77a23e3a6dd0ed445a7f567c1437066a8775c25686a167c462424342e5ad not found: ID does not exist" containerID="bb8c77a23e3a6dd0ed445a7f567c1437066a8775c25686a167c462424342e5ad"
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.598952 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8c77a23e3a6dd0ed445a7f567c1437066a8775c25686a167c462424342e5ad"} err="failed to get container status \"bb8c77a23e3a6dd0ed445a7f567c1437066a8775c25686a167c462424342e5ad\": rpc error: code = NotFound desc = could not find container \"bb8c77a23e3a6dd0ed445a7f567c1437066a8775c25686a167c462424342e5ad\": container with ID starting with bb8c77a23e3a6dd0ed445a7f567c1437066a8775c25686a167c462424342e5ad not found: ID does not exist"
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.598972 4768 scope.go:117] "RemoveContainer" containerID="bbb3d5dc503f42663a282cf67f51a144f5843fcddc9e3ddeca03bd95bdc723a7"
Nov 24 17:09:03 crc kubenswrapper[4768]: E1124 17:09:03.599315 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbb3d5dc503f42663a282cf67f51a144f5843fcddc9e3ddeca03bd95bdc723a7\": container with ID starting with bbb3d5dc503f42663a282cf67f51a144f5843fcddc9e3ddeca03bd95bdc723a7 not found: ID does not exist" containerID="bbb3d5dc503f42663a282cf67f51a144f5843fcddc9e3ddeca03bd95bdc723a7"
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.599335 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbb3d5dc503f42663a282cf67f51a144f5843fcddc9e3ddeca03bd95bdc723a7"} err="failed to get container status \"bbb3d5dc503f42663a282cf67f51a144f5843fcddc9e3ddeca03bd95bdc723a7\": rpc error: code = NotFound desc = could not find container \"bbb3d5dc503f42663a282cf67f51a144f5843fcddc9e3ddeca03bd95bdc723a7\": container with ID starting with bbb3d5dc503f42663a282cf67f51a144f5843fcddc9e3ddeca03bd95bdc723a7 not found: ID does not exist"
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.602699 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9038e5e4-2985-4de6-b6d5-e16d170d38d8-kube-api-access-2tpjw" (OuterVolumeSpecName: "kube-api-access-2tpjw") pod "9038e5e4-2985-4de6-b6d5-e16d170d38d8" (UID: "9038e5e4-2985-4de6-b6d5-e16d170d38d8"). InnerVolumeSpecName "kube-api-access-2tpjw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.606673 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "9038e5e4-2985-4de6-b6d5-e16d170d38d8" (UID: "9038e5e4-2985-4de6-b6d5-e16d170d38d8"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.691814 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9038e5e4-2985-4de6-b6d5-e16d170d38d8" (UID: "9038e5e4-2985-4de6-b6d5-e16d170d38d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.693032 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tpjw\" (UniqueName: \"kubernetes.io/projected/9038e5e4-2985-4de6-b6d5-e16d170d38d8-kube-api-access-2tpjw\") on node \"crc\" DevicePath \"\""
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.693078 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.693088 4768 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-httpd-config\") on node \"crc\" DevicePath \"\""
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.724060 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "9038e5e4-2985-4de6-b6d5-e16d170d38d8" (UID: "9038e5e4-2985-4de6-b6d5-e16d170d38d8"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.726168 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-config" (OuterVolumeSpecName: "config") pod "9038e5e4-2985-4de6-b6d5-e16d170d38d8" (UID: "9038e5e4-2985-4de6-b6d5-e16d170d38d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.796636 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-config\") on node \"crc\" DevicePath \"\""
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.796684 4768 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9038e5e4-2985-4de6-b6d5-e16d170d38d8-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.857788 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-58cbfb7868-t7r6m"]
Nov 24 17:09:03 crc kubenswrapper[4768]: I1124 17:09:03.868133 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-58cbfb7868-t7r6m"]
Nov 24 17:09:04 crc kubenswrapper[4768]: I1124 17:09:04.210224 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-58f546f576-kqv27"
Nov 24 17:09:04 crc kubenswrapper[4768]: I1124 17:09:04.216090 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-58f546f576-kqv27"
Nov 24 17:09:04 crc kubenswrapper[4768]: I1124 17:09:04.540616 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3b85390f-acde-4350-8c18-1f588ffa8ab5","Type":"ContainerStarted","Data":"0c131ae4fa919e95f768d8aa4366d9c64eec148df1450d9cf4141f61aa891efa"}
Nov 24 17:09:04 crc kubenswrapper[4768]: I1124 17:09:04.560856 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.560817508 podStartE2EDuration="3.560817508s" podCreationTimestamp="2025-11-24 17:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:09:04.558128942 +0000 UTC m=+1025.805097600" watchObservedRunningTime="2025-11-24 17:09:04.560817508 +0000 UTC m=+1025.807786166"
Nov 24 17:09:04 crc kubenswrapper[4768]: I1124 17:09:04.855517 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-74667f8554-ph5sd"
Nov 24 17:09:04 crc kubenswrapper[4768]: I1124 17:09:04.894116 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 17:09:04 crc kubenswrapper[4768]: I1124 17:09:04.894165 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.183742 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-798b498bb4-66crl"
Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.257772 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-7fbb6d564d-76t79"]
Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.258045 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-7fbb6d564d-76t79" podUID="e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" containerName="ironic-api-log" containerID="cri-o://0f2a40034662ac5a1c46ee72d62e06db4d556b4bf8a178f3af2b83d5a163eea6" gracePeriod=60
podUID="e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" containerName="ironic-api-log" containerID="cri-o://0f2a40034662ac5a1c46ee72d62e06db4d556b4bf8a178f3af2b83d5a163eea6" gracePeriod=60 Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.559721 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-sync-hk9hx"] Nov 24 17:09:05 crc kubenswrapper[4768]: E1124 17:09:05.560036 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9038e5e4-2985-4de6-b6d5-e16d170d38d8" containerName="neutron-api" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.560047 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9038e5e4-2985-4de6-b6d5-e16d170d38d8" containerName="neutron-api" Nov 24 17:09:05 crc kubenswrapper[4768]: E1124 17:09:05.560082 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9038e5e4-2985-4de6-b6d5-e16d170d38d8" containerName="neutron-httpd" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.560088 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9038e5e4-2985-4de6-b6d5-e16d170d38d8" containerName="neutron-httpd" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.560255 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9038e5e4-2985-4de6-b6d5-e16d170d38d8" containerName="neutron-api" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.560274 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9038e5e4-2985-4de6-b6d5-e16d170d38d8" containerName="neutron-httpd" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.560804 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.563844 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.565098 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.592570 4768 generic.go:334] "Generic (PLEG): container finished" podID="e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" containerID="0f2a40034662ac5a1c46ee72d62e06db4d556b4bf8a178f3af2b83d5a163eea6" exitCode=143 Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.594338 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9038e5e4-2985-4de6-b6d5-e16d170d38d8" path="/var/lib/kubelet/pods/9038e5e4-2985-4de6-b6d5-e16d170d38d8/volumes" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.594948 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-hk9hx"] Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.594974 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7fbb6d564d-76t79" event={"ID":"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74","Type":"ContainerDied","Data":"0f2a40034662ac5a1c46ee72d62e06db4d556b4bf8a178f3af2b83d5a163eea6"} Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.733542 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39158e2-1592-48f9-ba0e-198ab1030790-scripts\") pod \"ironic-inspector-db-sync-hk9hx\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.734136 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/d39158e2-1592-48f9-ba0e-198ab1030790-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-hk9hx\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.734205 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw9gs\" (UniqueName: \"kubernetes.io/projected/d39158e2-1592-48f9-ba0e-198ab1030790-kube-api-access-pw9gs\") pod \"ironic-inspector-db-sync-hk9hx\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.734259 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d39158e2-1592-48f9-ba0e-198ab1030790-etc-podinfo\") pod \"ironic-inspector-db-sync-hk9hx\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.734286 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39158e2-1592-48f9-ba0e-198ab1030790-combined-ca-bundle\") pod \"ironic-inspector-db-sync-hk9hx\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.734317 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/d39158e2-1592-48f9-ba0e-198ab1030790-var-lib-ironic\") pod \"ironic-inspector-db-sync-hk9hx\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.734389 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d39158e2-1592-48f9-ba0e-198ab1030790-config\") pod \"ironic-inspector-db-sync-hk9hx\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.754316 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.761729 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.762468 4768 scope.go:117] "RemoveContainer" containerID="b0d471d18f44db3856de6f9f8ca0d8ce54e3bb9cfd8718a90f63b7e19aa51aaf" Nov 24 17:09:05 crc kubenswrapper[4768]: E1124 17:09:05.762825 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-cb4d89897-bnsh5_openstack(26b563bb-da9a-43fe-b201-9f77ed0d0ddd)\"" pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" podUID="26b563bb-da9a-43fe-b201-9f77ed0d0ddd" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.835706 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-config-data\") pod \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.835816 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-config-data-merged\") pod \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.835869 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-combined-ca-bundle\") pod \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.835954 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2kmw\" (UniqueName: \"kubernetes.io/projected/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-kube-api-access-j2kmw\") pod \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.836079 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-logs\") pod \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.836111 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-etc-podinfo\") pod \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.836151 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-scripts\") pod \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.836176 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-config-data-custom\") pod \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\" (UID: \"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74\") " Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.836295 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" (UID: "e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.836529 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39158e2-1592-48f9-ba0e-198ab1030790-scripts\") pod \"ironic-inspector-db-sync-hk9hx\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.836563 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/d39158e2-1592-48f9-ba0e-198ab1030790-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-hk9hx\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.836612 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw9gs\" (UniqueName: \"kubernetes.io/projected/d39158e2-1592-48f9-ba0e-198ab1030790-kube-api-access-pw9gs\") pod \"ironic-inspector-db-sync-hk9hx\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.836619 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-logs" (OuterVolumeSpecName: "logs") pod "e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" (UID: "e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.836673 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d39158e2-1592-48f9-ba0e-198ab1030790-etc-podinfo\") pod \"ironic-inspector-db-sync-hk9hx\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.836707 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39158e2-1592-48f9-ba0e-198ab1030790-combined-ca-bundle\") pod \"ironic-inspector-db-sync-hk9hx\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.836734 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/d39158e2-1592-48f9-ba0e-198ab1030790-var-lib-ironic\") pod \"ironic-inspector-db-sync-hk9hx\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.836766 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d39158e2-1592-48f9-ba0e-198ab1030790-config\") pod \"ironic-inspector-db-sync-hk9hx\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.836870 4768 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-config-data-merged\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.836887 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-logs\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.837735 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/d39158e2-1592-48f9-ba0e-198ab1030790-var-lib-ironic\") pod \"ironic-inspector-db-sync-hk9hx\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.841941 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" (UID: "e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74"). InnerVolumeSpecName "etc-podinfo". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.842534 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/d39158e2-1592-48f9-ba0e-198ab1030790-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-hk9hx\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.843035 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d39158e2-1592-48f9-ba0e-198ab1030790-etc-podinfo\") pod \"ironic-inspector-db-sync-hk9hx\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.843709 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" (UID: "e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.845840 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39158e2-1592-48f9-ba0e-198ab1030790-combined-ca-bundle\") pod \"ironic-inspector-db-sync-hk9hx\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.852205 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d39158e2-1592-48f9-ba0e-198ab1030790-config\") pod \"ironic-inspector-db-sync-hk9hx\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.854725 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw9gs\" (UniqueName: \"kubernetes.io/projected/d39158e2-1592-48f9-ba0e-198ab1030790-kube-api-access-pw9gs\") pod \"ironic-inspector-db-sync-hk9hx\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.856959 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39158e2-1592-48f9-ba0e-198ab1030790-scripts\") pod \"ironic-inspector-db-sync-hk9hx\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.857985 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-kube-api-access-j2kmw" (OuterVolumeSpecName: "kube-api-access-j2kmw") pod "e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" (UID: "e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74"). InnerVolumeSpecName "kube-api-access-j2kmw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.858424 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-scripts" (OuterVolumeSpecName: "scripts") pod "e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" (UID: "e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.876184 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-config-data" (OuterVolumeSpecName: "config-data") pod "e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" (UID: "e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.886770 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.901062 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" (UID: "e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.938793 4768 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.938851 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.938863 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.938877 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2kmw\" (UniqueName: \"kubernetes.io/projected/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-kube-api-access-j2kmw\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.938910 4768 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-etc-podinfo\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:05 crc kubenswrapper[4768]: I1124 17:09:05.938939 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:06 crc kubenswrapper[4768]: I1124 17:09:06.392503 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-hk9hx"] Nov 24 17:09:06 crc kubenswrapper[4768]: W1124 17:09:06.409530 4768 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd39158e2_1592_48f9_ba0e_198ab1030790.slice/crio-4d840f25618acb6dead0f2db437153cf749e7d8b5c6bb66d656c919a70ac7ba6 WatchSource:0}: Error finding container 4d840f25618acb6dead0f2db437153cf749e7d8b5c6bb66d656c919a70ac7ba6: Status 404 returned error can't find the container with id 4d840f25618acb6dead0f2db437153cf749e7d8b5c6bb66d656c919a70ac7ba6 Nov 24 17:09:06 crc kubenswrapper[4768]: I1124 17:09:06.610319 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-7fbb6d564d-76t79" Nov 24 17:09:06 crc kubenswrapper[4768]: I1124 17:09:06.610401 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7fbb6d564d-76t79" event={"ID":"e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74","Type":"ContainerDied","Data":"eb82a4a38dbdb2812542c7814c8c830b6dfd5ebfa1f414ba33d14008fccaf6cb"} Nov 24 17:09:06 crc kubenswrapper[4768]: I1124 17:09:06.610473 4768 scope.go:117] "RemoveContainer" containerID="575e23707546feec2150ac41ef50849f98531660227b5457c2440201a60da8c5" Nov 24 17:09:06 crc kubenswrapper[4768]: I1124 17:09:06.622739 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-hk9hx" event={"ID":"d39158e2-1592-48f9-ba0e-198ab1030790","Type":"ContainerStarted","Data":"4d840f25618acb6dead0f2db437153cf749e7d8b5c6bb66d656c919a70ac7ba6"} Nov 24 17:09:06 crc kubenswrapper[4768]: I1124 17:09:06.643933 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-7fbb6d564d-76t79"] Nov 24 17:09:06 crc kubenswrapper[4768]: I1124 17:09:06.661516 4768 scope.go:117] "RemoveContainer" containerID="0f2a40034662ac5a1c46ee72d62e06db4d556b4bf8a178f3af2b83d5a163eea6" Nov 24 17:09:06 crc kubenswrapper[4768]: I1124 17:09:06.666474 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-7fbb6d564d-76t79"] Nov 24 17:09:06 crc kubenswrapper[4768]: I1124 17:09:06.688566 4768 scope.go:117] "RemoveContainer" containerID="4ecca148c2d11b37a7385cb899769f1734736f4a513982d8a2a490123e1b165a" Nov 24 17:09:06 crc kubenswrapper[4768]: I1124 17:09:06.838979 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 17:09:06 crc kubenswrapper[4768]: I1124 17:09:06.999913 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 24 17:09:07 crc kubenswrapper[4768]: E1124 17:09:07.001170 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" containerName="init" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.001198 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" containerName="init" Nov 24 17:09:07 crc kubenswrapper[4768]: E1124 17:09:07.001234 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" containerName="ironic-api" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.001242 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" containerName="ironic-api" Nov 24 17:09:07 crc kubenswrapper[4768]: E1124 17:09:07.001256 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" containerName="ironic-api" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.001264 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" containerName="ironic-api" Nov 24 
17:09:07 crc kubenswrapper[4768]: E1124 17:09:07.001282 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" containerName="ironic-api-log" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.001290 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" containerName="ironic-api-log" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.001618 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" containerName="ironic-api-log" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.001661 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" containerName="ironic-api" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.001672 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" containerName="ironic-api" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.002805 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.006571 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.006583 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.006747 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-85cxj" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.007809 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.169483 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7541e37b-3221-4158-8d66-4682a77e8172-openstack-config-secret\") pod \"openstackclient\" (UID: \"7541e37b-3221-4158-8d66-4682a77e8172\") " pod="openstack/openstackclient" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.169539 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p98t8\" (UniqueName: \"kubernetes.io/projected/7541e37b-3221-4158-8d66-4682a77e8172-kube-api-access-p98t8\") pod \"openstackclient\" (UID: \"7541e37b-3221-4158-8d66-4682a77e8172\") " pod="openstack/openstackclient" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.169614 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7541e37b-3221-4158-8d66-4682a77e8172-openstack-config\") pod \"openstackclient\" (UID: \"7541e37b-3221-4158-8d66-4682a77e8172\") " pod="openstack/openstackclient" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.169860 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7541e37b-3221-4158-8d66-4682a77e8172-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7541e37b-3221-4158-8d66-4682a77e8172\") " pod="openstack/openstackclient" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.272054 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7541e37b-3221-4158-8d66-4682a77e8172-openstack-config\") pod \"openstackclient\" (UID: \"7541e37b-3221-4158-8d66-4682a77e8172\") " pod="openstack/openstackclient" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.272157 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7541e37b-3221-4158-8d66-4682a77e8172-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7541e37b-3221-4158-8d66-4682a77e8172\") " pod="openstack/openstackclient" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.272215 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7541e37b-3221-4158-8d66-4682a77e8172-openstack-config-secret\") pod \"openstackclient\" (UID: \"7541e37b-3221-4158-8d66-4682a77e8172\") " pod="openstack/openstackclient" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.272249 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p98t8\" (UniqueName: \"kubernetes.io/projected/7541e37b-3221-4158-8d66-4682a77e8172-kube-api-access-p98t8\") pod \"openstackclient\" (UID: \"7541e37b-3221-4158-8d66-4682a77e8172\") " pod="openstack/openstackclient" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.272902 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7541e37b-3221-4158-8d66-4682a77e8172-openstack-config\") pod \"openstackclient\" (UID: \"7541e37b-3221-4158-8d66-4682a77e8172\") " pod="openstack/openstackclient" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.283391 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7541e37b-3221-4158-8d66-4682a77e8172-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7541e37b-3221-4158-8d66-4682a77e8172\") " pod="openstack/openstackclient" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.283843 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7541e37b-3221-4158-8d66-4682a77e8172-openstack-config-secret\") pod \"openstackclient\" (UID: \"7541e37b-3221-4158-8d66-4682a77e8172\") " pod="openstack/openstackclient" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.300980 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p98t8\" (UniqueName: \"kubernetes.io/projected/7541e37b-3221-4158-8d66-4682a77e8172-kube-api-access-p98t8\") pod \"openstackclient\" (UID: \"7541e37b-3221-4158-8d66-4682a77e8172\") " pod="openstack/openstackclient" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.327046 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.590640 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74" path="/var/lib/kubelet/pods/e6f2fe6a-9fde-4b77-a1f8-337f12fb4e74/volumes" Nov 24 17:09:07 crc kubenswrapper[4768]: I1124 17:09:07.811435 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 17:09:08 crc kubenswrapper[4768]: I1124 17:09:08.649643 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7541e37b-3221-4158-8d66-4682a77e8172","Type":"ContainerStarted","Data":"6a6aea1b13ce8824226ce4ceb30eca4a10892cb318fb1826398a19bb856bc219"} Nov 24 17:09:11 crc kubenswrapper[4768]: I1124 17:09:11.689050 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-hk9hx" event={"ID":"d39158e2-1592-48f9-ba0e-198ab1030790","Type":"ContainerStarted","Data":"8da4c41076df556619067ade3f956212a197e5ff8705f0af8d2cf3317eabf04f"} Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.078525 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.099141 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-sync-hk9hx" podStartSLOduration=2.752674909 podStartE2EDuration="7.099109977s" podCreationTimestamp="2025-11-24 17:09:05 +0000 UTC" firstStartedPulling="2025-11-24 17:09:06.420021241 +0000 UTC m=+1027.666989899" lastFinishedPulling="2025-11-24 17:09:10.766456309 +0000 UTC m=+1032.013424967" observedRunningTime="2025-11-24 17:09:11.70545723 +0000 UTC m=+1032.952425888" watchObservedRunningTime="2025-11-24 17:09:12.099109977 +0000 UTC m=+1033.346078655" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.457040 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-68997d6dc7-xqk74"] Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.458490 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.462746 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.462997 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.463704 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.473058 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-68997d6dc7-xqk74"] Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.487521 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6faf5c89-9071-4710-bf7a-91f8b276370b-config-data\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.487557 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6faf5c89-9071-4710-bf7a-91f8b276370b-combined-ca-bundle\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.487603 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cnjm\" (UniqueName: \"kubernetes.io/projected/6faf5c89-9071-4710-bf7a-91f8b276370b-kube-api-access-7cnjm\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.487679 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6faf5c89-9071-4710-bf7a-91f8b276370b-etc-swift\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.487751 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6faf5c89-9071-4710-bf7a-91f8b276370b-internal-tls-certs\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.487778 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6faf5c89-9071-4710-bf7a-91f8b276370b-log-httpd\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.487798 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6faf5c89-9071-4710-bf7a-91f8b276370b-public-tls-certs\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " 
pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.487813 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6faf5c89-9071-4710-bf7a-91f8b276370b-run-httpd\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.589811 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6faf5c89-9071-4710-bf7a-91f8b276370b-etc-swift\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.590002 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6faf5c89-9071-4710-bf7a-91f8b276370b-internal-tls-certs\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.590053 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6faf5c89-9071-4710-bf7a-91f8b276370b-log-httpd\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.590089 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6faf5c89-9071-4710-bf7a-91f8b276370b-public-tls-certs\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.590109 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6faf5c89-9071-4710-bf7a-91f8b276370b-run-httpd\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.590165 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6faf5c89-9071-4710-bf7a-91f8b276370b-config-data\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.590186 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6faf5c89-9071-4710-bf7a-91f8b276370b-combined-ca-bundle\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.590236 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cnjm\" (UniqueName: \"kubernetes.io/projected/6faf5c89-9071-4710-bf7a-91f8b276370b-kube-api-access-7cnjm\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 
17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.591039 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6faf5c89-9071-4710-bf7a-91f8b276370b-run-httpd\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.591112 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6faf5c89-9071-4710-bf7a-91f8b276370b-log-httpd\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.595984 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6faf5c89-9071-4710-bf7a-91f8b276370b-combined-ca-bundle\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.597391 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6faf5c89-9071-4710-bf7a-91f8b276370b-config-data\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.599964 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6faf5c89-9071-4710-bf7a-91f8b276370b-internal-tls-certs\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.604398 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6faf5c89-9071-4710-bf7a-91f8b276370b-etc-swift\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.608986 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cnjm\" (UniqueName: \"kubernetes.io/projected/6faf5c89-9071-4710-bf7a-91f8b276370b-kube-api-access-7cnjm\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.608997 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6faf5c89-9071-4710-bf7a-91f8b276370b-public-tls-certs\") pod \"swift-proxy-68997d6dc7-xqk74\" (UID: \"6faf5c89-9071-4710-bf7a-91f8b276370b\") " pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.703274 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-hk9hx" event={"ID":"d39158e2-1592-48f9-ba0e-198ab1030790","Type":"ContainerDied","Data":"8da4c41076df556619067ade3f956212a197e5ff8705f0af8d2cf3317eabf04f"} Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.703383 4768 generic.go:334] "Generic (PLEG): container finished" podID="d39158e2-1592-48f9-ba0e-198ab1030790" 
containerID="8da4c41076df556619067ade3f956212a197e5ff8705f0af8d2cf3317eabf04f" exitCode=0 Nov 24 17:09:12 crc kubenswrapper[4768]: I1124 17:09:12.882682 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:13 crc kubenswrapper[4768]: I1124 17:09:13.404956 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-68997d6dc7-xqk74"] Nov 24 17:09:13 crc kubenswrapper[4768]: I1124 17:09:13.936281 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:09:13 crc kubenswrapper[4768]: I1124 17:09:13.936586 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" containerName="ceilometer-central-agent" containerID="cri-o://48461bd57a3203302f690386add4e274e127a3bfb0bd4a182439b97722537e75" gracePeriod=30 Nov 24 17:09:13 crc kubenswrapper[4768]: I1124 17:09:13.937215 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" containerName="proxy-httpd" containerID="cri-o://3d874cb5feed233ae5cf3ba66bb471ca897ed04accc768645ef5610be2b4c2e1" gracePeriod=30 Nov 24 17:09:13 crc kubenswrapper[4768]: I1124 17:09:13.937260 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" containerName="sg-core" containerID="cri-o://1cd7de69543d651a2a45bbd623bc728228339fc97dab37f77d476cd575ab7292" gracePeriod=30 Nov 24 17:09:13 crc kubenswrapper[4768]: I1124 17:09:13.937293 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" containerName="ceilometer-notification-agent" containerID="cri-o://c82b8491d99697fa36d3ef29ba2cf87e11ffd59af9ab5d07406b3422a8efec2d" gracePeriod=30 Nov 24 17:09:13 crc kubenswrapper[4768]: I1124 17:09:13.943413 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 17:09:14 crc kubenswrapper[4768]: I1124 17:09:14.729659 4768 generic.go:334] "Generic (PLEG): container finished" podID="6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" containerID="3d874cb5feed233ae5cf3ba66bb471ca897ed04accc768645ef5610be2b4c2e1" exitCode=0 Nov 24 17:09:14 crc kubenswrapper[4768]: I1124 17:09:14.729935 4768 generic.go:334] "Generic (PLEG): container finished" podID="6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" containerID="1cd7de69543d651a2a45bbd623bc728228339fc97dab37f77d476cd575ab7292" exitCode=2 Nov 24 17:09:14 crc kubenswrapper[4768]: I1124 17:09:14.729943 4768 generic.go:334] "Generic (PLEG): container finished" podID="6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" containerID="48461bd57a3203302f690386add4e274e127a3bfb0bd4a182439b97722537e75" exitCode=0 Nov 24 17:09:14 crc kubenswrapper[4768]: I1124 17:09:14.729962 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c","Type":"ContainerDied","Data":"3d874cb5feed233ae5cf3ba66bb471ca897ed04accc768645ef5610be2b4c2e1"} Nov 24 17:09:14 crc kubenswrapper[4768]: I1124 17:09:14.730001 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c","Type":"ContainerDied","Data":"1cd7de69543d651a2a45bbd623bc728228339fc97dab37f77d476cd575ab7292"} Nov 24 17:09:14 crc kubenswrapper[4768]: 
I1124 17:09:14.730013 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c","Type":"ContainerDied","Data":"48461bd57a3203302f690386add4e274e127a3bfb0bd4a182439b97722537e75"} Nov 24 17:09:17 crc kubenswrapper[4768]: I1124 17:09:17.516149 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.162:3000/\": dial tcp 10.217.0.162:3000: connect: connection refused" Nov 24 17:09:18 crc kubenswrapper[4768]: I1124 17:09:18.581329 4768 scope.go:117] "RemoveContainer" containerID="b0d471d18f44db3856de6f9f8ca0d8ce54e3bb9cfd8718a90f63b7e19aa51aaf" Nov 24 17:09:18 crc kubenswrapper[4768]: I1124 17:09:18.791039 4768 generic.go:334] "Generic (PLEG): container finished" podID="6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" containerID="c82b8491d99697fa36d3ef29ba2cf87e11ffd59af9ab5d07406b3422a8efec2d" exitCode=0 Nov 24 17:09:18 crc kubenswrapper[4768]: I1124 17:09:18.791085 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c","Type":"ContainerDied","Data":"c82b8491d99697fa36d3ef29ba2cf87e11ffd59af9ab5d07406b3422a8efec2d"} Nov 24 17:09:18 crc kubenswrapper[4768]: I1124 17:09:18.928806 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 17:09:18 crc kubenswrapper[4768]: I1124 17:09:18.929556 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="866ab349-cb74-4f16-9927-87eb7f5af5b8" containerName="kube-state-metrics" containerID="cri-o://b1dc7f04820c054aad088905b8e3e3062769cd9d95fe57725b98a4a20c3388ac" gracePeriod=30 Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.215632 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-68lhn"] Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.216754 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-68lhn" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.226292 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-68lhn"] Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.245902 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd9g2\" (UniqueName: \"kubernetes.io/projected/83837a2f-936f-4af4-b223-b3e109491af4-kube-api-access-rd9g2\") pod \"nova-api-db-create-68lhn\" (UID: \"83837a2f-936f-4af4-b223-b3e109491af4\") " pod="openstack/nova-api-db-create-68lhn" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.245962 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83837a2f-936f-4af4-b223-b3e109491af4-operator-scripts\") pod \"nova-api-db-create-68lhn\" (UID: \"83837a2f-936f-4af4-b223-b3e109491af4\") " pod="openstack/nova-api-db-create-68lhn" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.308915 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-zzwz5"] Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.309960 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-zzwz5" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.318768 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-zzwz5"] Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.347334 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8-operator-scripts\") pod \"nova-cell0-db-create-zzwz5\" (UID: \"6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8\") " pod="openstack/nova-cell0-db-create-zzwz5" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.347543 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcb9v\" (UniqueName: \"kubernetes.io/projected/6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8-kube-api-access-jcb9v\") pod \"nova-cell0-db-create-zzwz5\" (UID: \"6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8\") " pod="openstack/nova-cell0-db-create-zzwz5" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.347806 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd9g2\" (UniqueName: \"kubernetes.io/projected/83837a2f-936f-4af4-b223-b3e109491af4-kube-api-access-rd9g2\") pod \"nova-api-db-create-68lhn\" (UID: \"83837a2f-936f-4af4-b223-b3e109491af4\") " pod="openstack/nova-api-db-create-68lhn" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.347991 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83837a2f-936f-4af4-b223-b3e109491af4-operator-scripts\") pod \"nova-api-db-create-68lhn\" (UID: \"83837a2f-936f-4af4-b223-b3e109491af4\") " pod="openstack/nova-api-db-create-68lhn" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.348829 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83837a2f-936f-4af4-b223-b3e109491af4-operator-scripts\") pod \"nova-api-db-create-68lhn\" (UID: \"83837a2f-936f-4af4-b223-b3e109491af4\") " pod="openstack/nova-api-db-create-68lhn" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.370669 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd9g2\" (UniqueName: \"kubernetes.io/projected/83837a2f-936f-4af4-b223-b3e109491af4-kube-api-access-rd9g2\") pod \"nova-api-db-create-68lhn\" (UID: \"83837a2f-936f-4af4-b223-b3e109491af4\") " pod="openstack/nova-api-db-create-68lhn" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.415395 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9620-account-create-2k958"] Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.416435 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9620-account-create-2k958" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.418821 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.436421 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9620-account-create-2k958"] Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.449974 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx6hz\" (UniqueName: \"kubernetes.io/projected/17fbe0e3-4301-4a77-b1a1-ef966b69f21b-kube-api-access-gx6hz\") pod \"nova-api-9620-account-create-2k958\" (UID: \"17fbe0e3-4301-4a77-b1a1-ef966b69f21b\") " pod="openstack/nova-api-9620-account-create-2k958" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.450030 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8-operator-scripts\") pod \"nova-cell0-db-create-zzwz5\" (UID: \"6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8\") " pod="openstack/nova-cell0-db-create-zzwz5" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.450072 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcb9v\" (UniqueName: \"kubernetes.io/projected/6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8-kube-api-access-jcb9v\") pod \"nova-cell0-db-create-zzwz5\" (UID: \"6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8\") " pod="openstack/nova-cell0-db-create-zzwz5" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.450097 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17fbe0e3-4301-4a77-b1a1-ef966b69f21b-operator-scripts\") pod \"nova-api-9620-account-create-2k958\" (UID: \"17fbe0e3-4301-4a77-b1a1-ef966b69f21b\") " pod="openstack/nova-api-9620-account-create-2k958" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.450700 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8-operator-scripts\") pod \"nova-cell0-db-create-zzwz5\" (UID: \"6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8\") " pod="openstack/nova-cell0-db-create-zzwz5" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.483038 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcb9v\" (UniqueName: \"kubernetes.io/projected/6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8-kube-api-access-jcb9v\") pod \"nova-cell0-db-create-zzwz5\" (UID: \"6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8\") " pod="openstack/nova-cell0-db-create-zzwz5" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.513718 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-z8lj9"] Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.514819 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z8lj9" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.535904 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-68lhn" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.554381 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj4kx\" (UniqueName: \"kubernetes.io/projected/67d150e1-af0e-45d5-b366-e9e550d7457a-kube-api-access-gj4kx\") pod \"nova-cell1-db-create-z8lj9\" (UID: \"67d150e1-af0e-45d5-b366-e9e550d7457a\") " pod="openstack/nova-cell1-db-create-z8lj9" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.554588 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx6hz\" (UniqueName: \"kubernetes.io/projected/17fbe0e3-4301-4a77-b1a1-ef966b69f21b-kube-api-access-gx6hz\") pod \"nova-api-9620-account-create-2k958\" (UID: \"17fbe0e3-4301-4a77-b1a1-ef966b69f21b\") " pod="openstack/nova-api-9620-account-create-2k958" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.554747 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17fbe0e3-4301-4a77-b1a1-ef966b69f21b-operator-scripts\") pod \"nova-api-9620-account-create-2k958\" (UID: \"17fbe0e3-4301-4a77-b1a1-ef966b69f21b\") " pod="openstack/nova-api-9620-account-create-2k958" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.555321 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17fbe0e3-4301-4a77-b1a1-ef966b69f21b-operator-scripts\") pod \"nova-api-9620-account-create-2k958\" (UID: \"17fbe0e3-4301-4a77-b1a1-ef966b69f21b\") " pod="openstack/nova-api-9620-account-create-2k958" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.555376 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67d150e1-af0e-45d5-b366-e9e550d7457a-operator-scripts\") pod \"nova-cell1-db-create-z8lj9\" (UID: \"67d150e1-af0e-45d5-b366-e9e550d7457a\") " pod="openstack/nova-cell1-db-create-z8lj9" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.556273 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-z8lj9"] Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.570213 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx6hz\" (UniqueName: \"kubernetes.io/projected/17fbe0e3-4301-4a77-b1a1-ef966b69f21b-kube-api-access-gx6hz\") pod \"nova-api-9620-account-create-2k958\" (UID: \"17fbe0e3-4301-4a77-b1a1-ef966b69f21b\") " pod="openstack/nova-api-9620-account-create-2k958" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.626225 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zzwz5" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.628250 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5871-account-create-rxhx5"] Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.629644 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5871-account-create-rxhx5" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.631883 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.637124 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5871-account-create-rxhx5"] Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.656530 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67d150e1-af0e-45d5-b366-e9e550d7457a-operator-scripts\") pod \"nova-cell1-db-create-z8lj9\" (UID: \"67d150e1-af0e-45d5-b366-e9e550d7457a\") " pod="openstack/nova-cell1-db-create-z8lj9" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.656655 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj4kx\" (UniqueName: \"kubernetes.io/projected/67d150e1-af0e-45d5-b366-e9e550d7457a-kube-api-access-gj4kx\") pod \"nova-cell1-db-create-z8lj9\" (UID: \"67d150e1-af0e-45d5-b366-e9e550d7457a\") " pod="openstack/nova-cell1-db-create-z8lj9" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.659155 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67d150e1-af0e-45d5-b366-e9e550d7457a-operator-scripts\") pod \"nova-cell1-db-create-z8lj9\" (UID: \"67d150e1-af0e-45d5-b366-e9e550d7457a\") " pod="openstack/nova-cell1-db-create-z8lj9" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.673852 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj4kx\" (UniqueName: \"kubernetes.io/projected/67d150e1-af0e-45d5-b366-e9e550d7457a-kube-api-access-gj4kx\") pod \"nova-cell1-db-create-z8lj9\" (UID: \"67d150e1-af0e-45d5-b366-e9e550d7457a\") " pod="openstack/nova-cell1-db-create-z8lj9" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.738428 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9620-account-create-2k958" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.758292 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87c522f8-072d-496b-936e-0a692e3c1149-operator-scripts\") pod \"nova-cell0-5871-account-create-rxhx5\" (UID: \"87c522f8-072d-496b-936e-0a692e3c1149\") " pod="openstack/nova-cell0-5871-account-create-rxhx5" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.758506 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfspj\" (UniqueName: \"kubernetes.io/projected/87c522f8-072d-496b-936e-0a692e3c1149-kube-api-access-nfspj\") pod \"nova-cell0-5871-account-create-rxhx5\" (UID: \"87c522f8-072d-496b-936e-0a692e3c1149\") " pod="openstack/nova-cell0-5871-account-create-rxhx5" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.803630 4768 generic.go:334] "Generic (PLEG): container finished" podID="866ab349-cb74-4f16-9927-87eb7f5af5b8" containerID="b1dc7f04820c054aad088905b8e3e3062769cd9d95fe57725b98a4a20c3388ac" exitCode=2 Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.803695 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"866ab349-cb74-4f16-9927-87eb7f5af5b8","Type":"ContainerDied","Data":"b1dc7f04820c054aad088905b8e3e3062769cd9d95fe57725b98a4a20c3388ac"} Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.824277 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-9aef-account-create-mjnp6"] Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.825951 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9aef-account-create-mjnp6" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.828462 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.833579 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-z8lj9" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.838159 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9aef-account-create-mjnp6"] Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.860552 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfspj\" (UniqueName: \"kubernetes.io/projected/87c522f8-072d-496b-936e-0a692e3c1149-kube-api-access-nfspj\") pod \"nova-cell0-5871-account-create-rxhx5\" (UID: \"87c522f8-072d-496b-936e-0a692e3c1149\") " pod="openstack/nova-cell0-5871-account-create-rxhx5" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.860619 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnh9z\" (UniqueName: \"kubernetes.io/projected/bfd60bb7-f834-43fd-9758-842ebcf0fc3b-kube-api-access-vnh9z\") pod \"nova-cell1-9aef-account-create-mjnp6\" (UID: \"bfd60bb7-f834-43fd-9758-842ebcf0fc3b\") " pod="openstack/nova-cell1-9aef-account-create-mjnp6" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.860655 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfd60bb7-f834-43fd-9758-842ebcf0fc3b-operator-scripts\") pod \"nova-cell1-9aef-account-create-mjnp6\" (UID: \"bfd60bb7-f834-43fd-9758-842ebcf0fc3b\") " pod="openstack/nova-cell1-9aef-account-create-mjnp6" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.860707 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87c522f8-072d-496b-936e-0a692e3c1149-operator-scripts\") pod \"nova-cell0-5871-account-create-rxhx5\" (UID: \"87c522f8-072d-496b-936e-0a692e3c1149\") " pod="openstack/nova-cell0-5871-account-create-rxhx5" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.861308 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87c522f8-072d-496b-936e-0a692e3c1149-operator-scripts\") pod \"nova-cell0-5871-account-create-rxhx5\" (UID: \"87c522f8-072d-496b-936e-0a692e3c1149\") " pod="openstack/nova-cell0-5871-account-create-rxhx5" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.915082 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfspj\" (UniqueName: \"kubernetes.io/projected/87c522f8-072d-496b-936e-0a692e3c1149-kube-api-access-nfspj\") pod \"nova-cell0-5871-account-create-rxhx5\" (UID: \"87c522f8-072d-496b-936e-0a692e3c1149\") " pod="openstack/nova-cell0-5871-account-create-rxhx5" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.960159 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5871-account-create-rxhx5" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.961709 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnh9z\" (UniqueName: \"kubernetes.io/projected/bfd60bb7-f834-43fd-9758-842ebcf0fc3b-kube-api-access-vnh9z\") pod \"nova-cell1-9aef-account-create-mjnp6\" (UID: \"bfd60bb7-f834-43fd-9758-842ebcf0fc3b\") " pod="openstack/nova-cell1-9aef-account-create-mjnp6" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.961756 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfd60bb7-f834-43fd-9758-842ebcf0fc3b-operator-scripts\") pod \"nova-cell1-9aef-account-create-mjnp6\" (UID: \"bfd60bb7-f834-43fd-9758-842ebcf0fc3b\") " pod="openstack/nova-cell1-9aef-account-create-mjnp6" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.962369 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfd60bb7-f834-43fd-9758-842ebcf0fc3b-operator-scripts\") pod \"nova-cell1-9aef-account-create-mjnp6\" (UID: \"bfd60bb7-f834-43fd-9758-842ebcf0fc3b\") " pod="openstack/nova-cell1-9aef-account-create-mjnp6" Nov 24 17:09:19 crc kubenswrapper[4768]: I1124 17:09:19.976403 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnh9z\" (UniqueName: \"kubernetes.io/projected/bfd60bb7-f834-43fd-9758-842ebcf0fc3b-kube-api-access-vnh9z\") pod \"nova-cell1-9aef-account-create-mjnp6\" (UID: \"bfd60bb7-f834-43fd-9758-842ebcf0fc3b\") " pod="openstack/nova-cell1-9aef-account-create-mjnp6" Nov 24 17:09:20 crc kubenswrapper[4768]: I1124 17:09:20.147748 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9aef-account-create-mjnp6" Nov 24 17:09:22 crc kubenswrapper[4768]: I1124 17:09:22.667334 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="866ab349-cb74-4f16-9927-87eb7f5af5b8" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": dial tcp 10.217.0.104:8081: connect: connection refused" Nov 24 17:09:22 crc kubenswrapper[4768]: W1124 17:09:22.770020 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6faf5c89_9071_4710_bf7a_91f8b276370b.slice/crio-050d53b3fef0779550625e3054dccabc98bd2cf37352550da38865d1352338a7 WatchSource:0}: Error finding container 050d53b3fef0779550625e3054dccabc98bd2cf37352550da38865d1352338a7: Status 404 returned error can't find the container with id 050d53b3fef0779550625e3054dccabc98bd2cf37352550da38865d1352338a7 Nov 24 17:09:22 crc kubenswrapper[4768]: I1124 17:09:22.857058 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-hk9hx" event={"ID":"d39158e2-1592-48f9-ba0e-198ab1030790","Type":"ContainerDied","Data":"4d840f25618acb6dead0f2db437153cf749e7d8b5c6bb66d656c919a70ac7ba6"} Nov 24 17:09:22 crc kubenswrapper[4768]: I1124 17:09:22.857117 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d840f25618acb6dead0f2db437153cf749e7d8b5c6bb66d656c919a70ac7ba6" Nov 24 17:09:22 crc kubenswrapper[4768]: I1124 17:09:22.870043 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-68997d6dc7-xqk74" event={"ID":"6faf5c89-9071-4710-bf7a-91f8b276370b","Type":"ContainerStarted","Data":"050d53b3fef0779550625e3054dccabc98bd2cf37352550da38865d1352338a7"} Nov 24 17:09:22 crc kubenswrapper[4768]: I1124 17:09:22.973864 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.037056 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/d39158e2-1592-48f9-ba0e-198ab1030790-var-lib-ironic\") pod \"d39158e2-1592-48f9-ba0e-198ab1030790\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.037128 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/d39158e2-1592-48f9-ba0e-198ab1030790-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"d39158e2-1592-48f9-ba0e-198ab1030790\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.037156 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw9gs\" (UniqueName: \"kubernetes.io/projected/d39158e2-1592-48f9-ba0e-198ab1030790-kube-api-access-pw9gs\") pod \"d39158e2-1592-48f9-ba0e-198ab1030790\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.037243 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d39158e2-1592-48f9-ba0e-198ab1030790-etc-podinfo\") pod \"d39158e2-1592-48f9-ba0e-198ab1030790\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.037299 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39158e2-1592-48f9-ba0e-198ab1030790-scripts\") pod \"d39158e2-1592-48f9-ba0e-198ab1030790\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.037340 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39158e2-1592-48f9-ba0e-198ab1030790-combined-ca-bundle\") pod \"d39158e2-1592-48f9-ba0e-198ab1030790\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.037451 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d39158e2-1592-48f9-ba0e-198ab1030790-config\") pod \"d39158e2-1592-48f9-ba0e-198ab1030790\" (UID: \"d39158e2-1592-48f9-ba0e-198ab1030790\") " Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.037667 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d39158e2-1592-48f9-ba0e-198ab1030790-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "d39158e2-1592-48f9-ba0e-198ab1030790" (UID: "d39158e2-1592-48f9-ba0e-198ab1030790"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.037866 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d39158e2-1592-48f9-ba0e-198ab1030790-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "d39158e2-1592-48f9-ba0e-198ab1030790" (UID: "d39158e2-1592-48f9-ba0e-198ab1030790"). InnerVolumeSpecName "var-lib-ironic". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.037886 4768 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/d39158e2-1592-48f9-ba0e-198ab1030790-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.044242 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d39158e2-1592-48f9-ba0e-198ab1030790-kube-api-access-pw9gs" (OuterVolumeSpecName: "kube-api-access-pw9gs") pod "d39158e2-1592-48f9-ba0e-198ab1030790" (UID: "d39158e2-1592-48f9-ba0e-198ab1030790"). InnerVolumeSpecName "kube-api-access-pw9gs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.045826 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d39158e2-1592-48f9-ba0e-198ab1030790-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "d39158e2-1592-48f9-ba0e-198ab1030790" (UID: "d39158e2-1592-48f9-ba0e-198ab1030790"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.054050 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39158e2-1592-48f9-ba0e-198ab1030790-scripts" (OuterVolumeSpecName: "scripts") pod "d39158e2-1592-48f9-ba0e-198ab1030790" (UID: "d39158e2-1592-48f9-ba0e-198ab1030790"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.078994 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39158e2-1592-48f9-ba0e-198ab1030790-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d39158e2-1592-48f9-ba0e-198ab1030790" (UID: "d39158e2-1592-48f9-ba0e-198ab1030790"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.079926 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d39158e2-1592-48f9-ba0e-198ab1030790-config" (OuterVolumeSpecName: "config") pod "d39158e2-1592-48f9-ba0e-198ab1030790" (UID: "d39158e2-1592-48f9-ba0e-198ab1030790"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.139842 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d39158e2-1592-48f9-ba0e-198ab1030790-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.140277 4768 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/d39158e2-1592-48f9-ba0e-198ab1030790-var-lib-ironic\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.140356 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw9gs\" (UniqueName: \"kubernetes.io/projected/d39158e2-1592-48f9-ba0e-198ab1030790-kube-api-access-pw9gs\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.140420 4768 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d39158e2-1592-48f9-ba0e-198ab1030790-etc-podinfo\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.140472 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39158e2-1592-48f9-ba0e-198ab1030790-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.140522 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39158e2-1592-48f9-ba0e-198ab1030790-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.260746 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.346145 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5r2b\" (UniqueName: \"kubernetes.io/projected/866ab349-cb74-4f16-9927-87eb7f5af5b8-kube-api-access-g5r2b\") pod \"866ab349-cb74-4f16-9927-87eb7f5af5b8\" (UID: \"866ab349-cb74-4f16-9927-87eb7f5af5b8\") " Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.352126 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/866ab349-cb74-4f16-9927-87eb7f5af5b8-kube-api-access-g5r2b" (OuterVolumeSpecName: "kube-api-access-g5r2b") pod "866ab349-cb74-4f16-9927-87eb7f5af5b8" (UID: "866ab349-cb74-4f16-9927-87eb7f5af5b8"). InnerVolumeSpecName "kube-api-access-g5r2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.357112 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.449019 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-run-httpd\") pod \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.449091 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-log-httpd\") pod \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.449200 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-config-data\") pod \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.449219 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-sg-core-conf-yaml\") pod \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.449258 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cvwc\" (UniqueName: \"kubernetes.io/projected/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-kube-api-access-8cvwc\") pod \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.449307 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-combined-ca-bundle\") pod \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.449407 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-scripts\") pod \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\" (UID: \"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c\") " Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.449563 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" (UID: "6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.449776 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5r2b\" (UniqueName: \"kubernetes.io/projected/866ab349-cb74-4f16-9927-87eb7f5af5b8-kube-api-access-g5r2b\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.449791 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.450002 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" (UID: "6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.450790 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-zzwz5"] Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.456903 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-kube-api-access-8cvwc" (OuterVolumeSpecName: "kube-api-access-8cvwc") pod "6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" (UID: "6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c"). InnerVolumeSpecName "kube-api-access-8cvwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.469064 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-scripts" (OuterVolumeSpecName: "scripts") pod "6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" (UID: "6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.484031 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" (UID: "6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.552830 4768 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.552857 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cvwc\" (UniqueName: \"kubernetes.io/projected/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-kube-api-access-8cvwc\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.552866 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.552875 4768 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.601173 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-config-data" (OuterVolumeSpecName: "config-data") pod "6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" (UID: "6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.662484 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.669210 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" (UID: "6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.715021 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-68lhn"] Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.721593 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9aef-account-create-mjnp6"] Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.730878 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9620-account-create-2k958"] Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.767848 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.822599 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5871-account-create-rxhx5"] Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.901510 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7541e37b-3221-4158-8d66-4682a77e8172","Type":"ContainerStarted","Data":"39cb8d5db9cc34ba93751b27809aaf97530a431ebe2c6b2a09587bbbcec4b428"} Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.905643 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9aef-account-create-mjnp6" event={"ID":"bfd60bb7-f834-43fd-9758-842ebcf0fc3b","Type":"ContainerStarted","Data":"ac7f2c369d506a848dd5040c030ca12dc7648a10a2356cf149489b4fef721659"} Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.915988 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c","Type":"ContainerDied","Data":"11c3770a298fbd38a42b575eb073a260a11e15235b6b4a94b72fa8d8dc0f2a9b"} Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.916059 4768 scope.go:117] "RemoveContainer" containerID="3d874cb5feed233ae5cf3ba66bb471ca897ed04accc768645ef5610be2b4c2e1" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.916216 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.920939 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.77329348 podStartE2EDuration="17.920923776s" podCreationTimestamp="2025-11-24 17:09:06 +0000 UTC" firstStartedPulling="2025-11-24 17:09:07.82024375 +0000 UTC m=+1029.067212408" lastFinishedPulling="2025-11-24 17:09:22.967874046 +0000 UTC m=+1044.214842704" observedRunningTime="2025-11-24 17:09:23.92003192 +0000 UTC m=+1045.167000568" watchObservedRunningTime="2025-11-24 17:09:23.920923776 +0000 UTC m=+1045.167892434" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.956128 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" event={"ID":"26b563bb-da9a-43fe-b201-9f77ed0d0ddd","Type":"ContainerStarted","Data":"c275b2db9500cf5bd56a67fa6a184feb405d6cb50c9465dc41d7ea5f34459fa1"} Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.956677 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.966477 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"866ab349-cb74-4f16-9927-87eb7f5af5b8","Type":"ContainerDied","Data":"ad26194ec6f9c128d7f19d4a27ea14aba49f7b19acaf9ff40fc0cc30bd0c78bf"} Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.969298 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.974325 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-z8lj9"] Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.976616 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zzwz5" event={"ID":"6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8","Type":"ContainerStarted","Data":"35936d586df799b245c9788dea50273a5ab2b119148280f5e12f3f040c50ee8c"} Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.976682 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zzwz5" event={"ID":"6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8","Type":"ContainerStarted","Data":"747a341b143293f4df95ac6926717a5813eddc0a0a83c39c8ee4902a8d53c955"} Nov 24 17:09:23 crc kubenswrapper[4768]: I1124 17:09:23.992615 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-68lhn" event={"ID":"83837a2f-936f-4af4-b223-b3e109491af4","Type":"ContainerStarted","Data":"d41cf4053de3c9cd6234834e63d75b4f6757e9f64918b4e10cb73df428e2a234"} Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.000330 4768 scope.go:117] "RemoveContainer" containerID="1cd7de69543d651a2a45bbd623bc728228339fc97dab37f77d476cd575ab7292" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.030274 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5871-account-create-rxhx5" event={"ID":"87c522f8-072d-496b-936e-0a692e3c1149","Type":"ContainerStarted","Data":"c2f239ac55af679aa61a962dc3d0e20c007f73de2afe47b458812464d96a7d00"} Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.035734 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" 
event={"ID":"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd","Type":"ContainerStarted","Data":"02f0ac1e47fefcbd9f508eab0ca672d9b9bb160eb41cd1df089b5c5e5b902ef6"} Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.039055 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-zzwz5" podStartSLOduration=5.039018014 podStartE2EDuration="5.039018014s" podCreationTimestamp="2025-11-24 17:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:09:24.025008028 +0000 UTC m=+1045.271976686" watchObservedRunningTime="2025-11-24 17:09:24.039018014 +0000 UTC m=+1045.285986672" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.039230 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-68997d6dc7-xqk74" event={"ID":"6faf5c89-9071-4710-bf7a-91f8b276370b","Type":"ContainerStarted","Data":"5920fe74cb88f8e466087af76c3e2dd5a495cc8322b4a18447e5be22dd7ed4dd"} Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.039271 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-68997d6dc7-xqk74" event={"ID":"6faf5c89-9071-4710-bf7a-91f8b276370b","Type":"ContainerStarted","Data":"06e20aa87f338c35c024f8da106f38e59c47f13abc2fb86ea69b8c0ae7ef65a4"} Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.039886 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.040114 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.043365 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-hk9hx" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.043401 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9620-account-create-2k958" event={"ID":"17fbe0e3-4301-4a77-b1a1-ef966b69f21b","Type":"ContainerStarted","Data":"c44ae4ee0181ec469a53b6cbe76794f6c63d41926c8524ed62196f9012e4acb6"} Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.067968 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.083120 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.092515 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.095604 4768 scope.go:117] "RemoveContainer" containerID="c82b8491d99697fa36d3ef29ba2cf87e11ffd59af9ab5d07406b3422a8efec2d" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.107313 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 17:09:24 crc kubenswrapper[4768]: E1124 17:09:24.107750 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d39158e2-1592-48f9-ba0e-198ab1030790" containerName="ironic-inspector-db-sync" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.107770 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39158e2-1592-48f9-ba0e-198ab1030790" containerName="ironic-inspector-db-sync" Nov 24 17:09:24 crc kubenswrapper[4768]: E1124 17:09:24.107782 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" containerName="sg-core" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.107788 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" containerName="sg-core" Nov 24 17:09:24 crc kubenswrapper[4768]: E1124 17:09:24.107808 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" containerName="ceilometer-central-agent" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.107815 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" containerName="ceilometer-central-agent" Nov 24 17:09:24 crc kubenswrapper[4768]: E1124 17:09:24.107829 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866ab349-cb74-4f16-9927-87eb7f5af5b8" containerName="kube-state-metrics" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.107844 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="866ab349-cb74-4f16-9927-87eb7f5af5b8" containerName="kube-state-metrics" Nov 24 17:09:24 crc kubenswrapper[4768]: E1124 17:09:24.107859 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" containerName="proxy-httpd" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.107865 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" containerName="proxy-httpd" Nov 24 17:09:24 crc kubenswrapper[4768]: E1124 17:09:24.107875 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" containerName="ceilometer-notification-agent" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.107881 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" containerName="ceilometer-notification-agent" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.108051 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" containerName="ceilometer-central-agent" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.108065 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" containerName="proxy-httpd" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.108077 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d39158e2-1592-48f9-ba0e-198ab1030790" containerName="ironic-inspector-db-sync" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.108093 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" containerName="ceilometer-notification-agent" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.108104 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" containerName="sg-core" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.108115 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="866ab349-cb74-4f16-9927-87eb7f5af5b8" containerName="kube-state-metrics" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.108788 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.112010 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-w2rz7" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.112207 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.112323 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.127897 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.143859 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.169524 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.172430 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.174940 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.175235 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.175524 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.175546 4768 scope.go:117] "RemoveContainer" containerID="48461bd57a3203302f690386add4e274e127a3bfb0bd4a182439b97722537e75" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.195454 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.210838 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-68997d6dc7-xqk74" podStartSLOduration=12.21080838 podStartE2EDuration="12.21080838s" podCreationTimestamp="2025-11-24 17:09:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:09:24.143271481 +0000 UTC m=+1045.390240159" watchObservedRunningTime="2025-11-24 17:09:24.21080838 +0000 UTC m=+1045.457777038" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.212328 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/be754be8-e18d-4413-bf31-5258e9ad4544-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"be754be8-e18d-4413-bf31-5258e9ad4544\") " pod="openstack/kube-state-metrics-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.212577 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2ssk\" (UniqueName: \"kubernetes.io/projected/be754be8-e18d-4413-bf31-5258e9ad4544-kube-api-access-l2ssk\") pod \"kube-state-metrics-0\" (UID: \"be754be8-e18d-4413-bf31-5258e9ad4544\") " pod="openstack/kube-state-metrics-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.212603 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/be754be8-e18d-4413-bf31-5258e9ad4544-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"be754be8-e18d-4413-bf31-5258e9ad4544\") " pod="openstack/kube-state-metrics-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.212632 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be754be8-e18d-4413-bf31-5258e9ad4544-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"be754be8-e18d-4413-bf31-5258e9ad4544\") " pod="openstack/kube-state-metrics-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.214682 4768 scope.go:117] "RemoveContainer" containerID="b1dc7f04820c054aad088905b8e3e3062769cd9d95fe57725b98a4a20c3388ac" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.315612 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-log-httpd\") pod 
\"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.315654 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/be754be8-e18d-4413-bf31-5258e9ad4544-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"be754be8-e18d-4413-bf31-5258e9ad4544\") " pod="openstack/kube-state-metrics-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.315700 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.315717 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-run-httpd\") pod \"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.315745 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6dmc\" (UniqueName: \"kubernetes.io/projected/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-kube-api-access-w6dmc\") pod \"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.315787 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2ssk\" (UniqueName: \"kubernetes.io/projected/be754be8-e18d-4413-bf31-5258e9ad4544-kube-api-access-l2ssk\") pod \"kube-state-metrics-0\" (UID: \"be754be8-e18d-4413-bf31-5258e9ad4544\") " pod="openstack/kube-state-metrics-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.315806 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/be754be8-e18d-4413-bf31-5258e9ad4544-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"be754be8-e18d-4413-bf31-5258e9ad4544\") " pod="openstack/kube-state-metrics-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.315826 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be754be8-e18d-4413-bf31-5258e9ad4544-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"be754be8-e18d-4413-bf31-5258e9ad4544\") " pod="openstack/kube-state-metrics-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.315850 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-scripts\") pod \"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.315872 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 
crc kubenswrapper[4768]: I1124 17:09:24.315906 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-config-data\") pod \"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.315922 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.332462 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/be754be8-e18d-4413-bf31-5258e9ad4544-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"be754be8-e18d-4413-bf31-5258e9ad4544\") " pod="openstack/kube-state-metrics-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.335553 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be754be8-e18d-4413-bf31-5258e9ad4544-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"be754be8-e18d-4413-bf31-5258e9ad4544\") " pod="openstack/kube-state-metrics-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.336304 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/be754be8-e18d-4413-bf31-5258e9ad4544-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"be754be8-e18d-4413-bf31-5258e9ad4544\") " pod="openstack/kube-state-metrics-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.338566 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2ssk\" (UniqueName: \"kubernetes.io/projected/be754be8-e18d-4413-bf31-5258e9ad4544-kube-api-access-l2ssk\") pod \"kube-state-metrics-0\" (UID: \"be754be8-e18d-4413-bf31-5258e9ad4544\") " pod="openstack/kube-state-metrics-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.417060 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-run-httpd\") pod \"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.417582 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.417649 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6dmc\" (UniqueName: \"kubernetes.io/projected/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-kube-api-access-w6dmc\") pod \"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.417727 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-scripts\") pod \"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.417767 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.417770 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-run-httpd\") pod \"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.417853 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-config-data\") pod \"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.417883 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.418050 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-log-httpd\") pod \"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.418487 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-log-httpd\") pod \"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.422482 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.424212 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-scripts\") pod \"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.426003 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-config-data\") pod \"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.426666 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.427982 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.437414 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6dmc\" (UniqueName: \"kubernetes.io/projected/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-kube-api-access-w6dmc\") pod \"ceilometer-0\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.496551 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.542565 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.932400 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:09:24 crc kubenswrapper[4768]: I1124 17:09:24.987484 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 17:09:24 crc kubenswrapper[4768]: W1124 17:09:24.988735 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe754be8_e18d_4413_bf31_5258e9ad4544.slice/crio-4155bb95ef98fabe33795f80a24f5fc98497573dd3657f8a8032ecc6c28a0523 WatchSource:0}: Error finding container 4155bb95ef98fabe33795f80a24f5fc98497573dd3657f8a8032ecc6c28a0523: Status 404 returned error can't find the container with id 4155bb95ef98fabe33795f80a24f5fc98497573dd3657f8a8032ecc6c28a0523 Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.052794 4768 generic.go:334] "Generic (PLEG): container finished" podID="83837a2f-936f-4af4-b223-b3e109491af4" containerID="9d0d72f00e8abcd148485c1cfa0e2496dc121d3e39fe5edd96e92820ed489184" exitCode=0 Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.052886 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-68lhn" event={"ID":"83837a2f-936f-4af4-b223-b3e109491af4","Type":"ContainerDied","Data":"9d0d72f00e8abcd148485c1cfa0e2496dc121d3e39fe5edd96e92820ed489184"} Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.054713 4768 generic.go:334] "Generic (PLEG): container finished" podID="bfd60bb7-f834-43fd-9758-842ebcf0fc3b" containerID="eae7a35038c5ac7ad834830f1ee390a0ccd282fdebcb761475a406e7153fffc5" exitCode=0 Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.054787 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9aef-account-create-mjnp6" event={"ID":"bfd60bb7-f834-43fd-9758-842ebcf0fc3b","Type":"ContainerDied","Data":"eae7a35038c5ac7ad834830f1ee390a0ccd282fdebcb761475a406e7153fffc5"} Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.057006 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"be754be8-e18d-4413-bf31-5258e9ad4544","Type":"ContainerStarted","Data":"4155bb95ef98fabe33795f80a24f5fc98497573dd3657f8a8032ecc6c28a0523"} Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.058796 4768 generic.go:334] "Generic (PLEG): 
container finished" podID="17fbe0e3-4301-4a77-b1a1-ef966b69f21b" containerID="8a50858fd1016bbb3d1597ddc8aa339facd09512dd17c081bf1c6a43daf7f13c" exitCode=0 Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.058840 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9620-account-create-2k958" event={"ID":"17fbe0e3-4301-4a77-b1a1-ef966b69f21b","Type":"ContainerDied","Data":"8a50858fd1016bbb3d1597ddc8aa339facd09512dd17c081bf1c6a43daf7f13c"} Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.060182 4768 generic.go:334] "Generic (PLEG): container finished" podID="87c522f8-072d-496b-936e-0a692e3c1149" containerID="eee42d8c2288eb60caf89af5348b3c4be5bf946dfd307099c59485aa5a431567" exitCode=0 Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.060253 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5871-account-create-rxhx5" event={"ID":"87c522f8-072d-496b-936e-0a692e3c1149","Type":"ContainerDied","Data":"eee42d8c2288eb60caf89af5348b3c4be5bf946dfd307099c59485aa5a431567"} Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.061589 4768 generic.go:334] "Generic (PLEG): container finished" podID="6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8" containerID="35936d586df799b245c9788dea50273a5ab2b119148280f5e12f3f040c50ee8c" exitCode=0 Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.061724 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zzwz5" event={"ID":"6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8","Type":"ContainerDied","Data":"35936d586df799b245c9788dea50273a5ab2b119148280f5e12f3f040c50ee8c"} Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.066689 4768 generic.go:334] "Generic (PLEG): container finished" podID="67d150e1-af0e-45d5-b366-e9e550d7457a" containerID="dfa5ef62f68844084fb9eaae8dc0f4ff331883249b3a6f39aabfc4d2645e6b44" exitCode=0 Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.066857 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z8lj9" event={"ID":"67d150e1-af0e-45d5-b366-e9e550d7457a","Type":"ContainerDied","Data":"dfa5ef62f68844084fb9eaae8dc0f4ff331883249b3a6f39aabfc4d2645e6b44"} Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.066879 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z8lj9" event={"ID":"67d150e1-af0e-45d5-b366-e9e550d7457a","Type":"ContainerStarted","Data":"9cec85540d044851f435996f9de62300199ddb8a12e6b9677fc85d29a2cd11c2"} Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.073979 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9da428d-f8bc-4aa5-a24e-2145450ef4a4","Type":"ContainerStarted","Data":"a4743bdb0cba0eae8d446d0acd4673e97dc6d0eea53cf35efd0468c4b558b4bc"} Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.591705 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c" path="/var/lib/kubelet/pods/6e15cdbb-3aa0-43e4-8b2b-8f1bec9b1b3c/volumes" Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.593125 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="866ab349-cb74-4f16-9927-87eb7f5af5b8" path="/var/lib/kubelet/pods/866ab349-cb74-4f16-9927-87eb7f5af5b8/volumes" Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.825490 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.833569 4768 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ironic-inspector-0" Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.837232 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.837411 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.840708 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.982454 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzzxx\" (UniqueName: \"kubernetes.io/projected/52101039-0b1c-4531-8076-50dc3e77ae68-kube-api-access-bzzxx\") pod \"ironic-inspector-0\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.982526 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/52101039-0b1c-4531-8076-50dc3e77ae68-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.982554 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/52101039-0b1c-4531-8076-50dc3e77ae68-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.982581 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52101039-0b1c-4531-8076-50dc3e77ae68-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.982612 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52101039-0b1c-4531-8076-50dc3e77ae68-scripts\") pod \"ironic-inspector-0\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.982637 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/52101039-0b1c-4531-8076-50dc3e77ae68-config\") pod \"ironic-inspector-0\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:25 crc kubenswrapper[4768]: I1124 17:09:25.982679 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/52101039-0b1c-4531-8076-50dc3e77ae68-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.086059 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: 
\"kubernetes.io/empty-dir/52101039-0b1c-4531-8076-50dc3e77ae68-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.086121 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/52101039-0b1c-4531-8076-50dc3e77ae68-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.086169 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52101039-0b1c-4531-8076-50dc3e77ae68-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.086212 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52101039-0b1c-4531-8076-50dc3e77ae68-scripts\") pod \"ironic-inspector-0\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.086249 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/52101039-0b1c-4531-8076-50dc3e77ae68-config\") pod \"ironic-inspector-0\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.086291 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/52101039-0b1c-4531-8076-50dc3e77ae68-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.086414 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzzxx\" (UniqueName: \"kubernetes.io/projected/52101039-0b1c-4531-8076-50dc3e77ae68-kube-api-access-bzzxx\") pod \"ironic-inspector-0\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.088075 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/52101039-0b1c-4531-8076-50dc3e77ae68-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.088312 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/52101039-0b1c-4531-8076-50dc3e77ae68-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.098037 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52101039-0b1c-4531-8076-50dc3e77ae68-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " pod="openstack/ironic-inspector-0" Nov 24 
17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.099044 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52101039-0b1c-4531-8076-50dc3e77ae68-scripts\") pod \"ironic-inspector-0\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.099194 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/52101039-0b1c-4531-8076-50dc3e77ae68-config\") pod \"ironic-inspector-0\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.113823 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzzxx\" (UniqueName: \"kubernetes.io/projected/52101039-0b1c-4531-8076-50dc3e77ae68-kube-api-access-bzzxx\") pod \"ironic-inspector-0\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.118253 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/52101039-0b1c-4531-8076-50dc3e77ae68-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.169999 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.632462 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9aef-account-create-mjnp6" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.695402 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnh9z\" (UniqueName: \"kubernetes.io/projected/bfd60bb7-f834-43fd-9758-842ebcf0fc3b-kube-api-access-vnh9z\") pod \"bfd60bb7-f834-43fd-9758-842ebcf0fc3b\" (UID: \"bfd60bb7-f834-43fd-9758-842ebcf0fc3b\") " Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.695613 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfd60bb7-f834-43fd-9758-842ebcf0fc3b-operator-scripts\") pod \"bfd60bb7-f834-43fd-9758-842ebcf0fc3b\" (UID: \"bfd60bb7-f834-43fd-9758-842ebcf0fc3b\") " Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.696849 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfd60bb7-f834-43fd-9758-842ebcf0fc3b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bfd60bb7-f834-43fd-9758-842ebcf0fc3b" (UID: "bfd60bb7-f834-43fd-9758-842ebcf0fc3b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.712568 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfd60bb7-f834-43fd-9758-842ebcf0fc3b-kube-api-access-vnh9z" (OuterVolumeSpecName: "kube-api-access-vnh9z") pod "bfd60bb7-f834-43fd-9758-842ebcf0fc3b" (UID: "bfd60bb7-f834-43fd-9758-842ebcf0fc3b"). InnerVolumeSpecName "kube-api-access-vnh9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.753987 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5871-account-create-rxhx5" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.797774 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfd60bb7-f834-43fd-9758-842ebcf0fc3b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.797801 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnh9z\" (UniqueName: \"kubernetes.io/projected/bfd60bb7-f834-43fd-9758-842ebcf0fc3b-kube-api-access-vnh9z\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.845678 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zzwz5" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.869404 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z8lj9" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.880032 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-68lhn" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.898849 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfspj\" (UniqueName: \"kubernetes.io/projected/87c522f8-072d-496b-936e-0a692e3c1149-kube-api-access-nfspj\") pod \"87c522f8-072d-496b-936e-0a692e3c1149\" (UID: \"87c522f8-072d-496b-936e-0a692e3c1149\") " Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.898967 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87c522f8-072d-496b-936e-0a692e3c1149-operator-scripts\") pod \"87c522f8-072d-496b-936e-0a692e3c1149\" (UID: \"87c522f8-072d-496b-936e-0a692e3c1149\") " Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.899849 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87c522f8-072d-496b-936e-0a692e3c1149-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87c522f8-072d-496b-936e-0a692e3c1149" (UID: "87c522f8-072d-496b-936e-0a692e3c1149"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.903970 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87c522f8-072d-496b-936e-0a692e3c1149-kube-api-access-nfspj" (OuterVolumeSpecName: "kube-api-access-nfspj") pod "87c522f8-072d-496b-936e-0a692e3c1149" (UID: "87c522f8-072d-496b-936e-0a692e3c1149"). InnerVolumeSpecName "kube-api-access-nfspj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:09:26 crc kubenswrapper[4768]: I1124 17:09:26.974472 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9620-account-create-2k958" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.000289 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8-operator-scripts\") pod \"6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8\" (UID: \"6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8\") " Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.000436 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj4kx\" (UniqueName: \"kubernetes.io/projected/67d150e1-af0e-45d5-b366-e9e550d7457a-kube-api-access-gj4kx\") pod \"67d150e1-af0e-45d5-b366-e9e550d7457a\" (UID: \"67d150e1-af0e-45d5-b366-e9e550d7457a\") " Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.000539 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd9g2\" (UniqueName: \"kubernetes.io/projected/83837a2f-936f-4af4-b223-b3e109491af4-kube-api-access-rd9g2\") pod \"83837a2f-936f-4af4-b223-b3e109491af4\" (UID: \"83837a2f-936f-4af4-b223-b3e109491af4\") " Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.000644 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcb9v\" (UniqueName: \"kubernetes.io/projected/6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8-kube-api-access-jcb9v\") pod \"6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8\" (UID: \"6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8\") " Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.000666 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67d150e1-af0e-45d5-b366-e9e550d7457a-operator-scripts\") pod \"67d150e1-af0e-45d5-b366-e9e550d7457a\" (UID: \"67d150e1-af0e-45d5-b366-e9e550d7457a\") " Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.000700 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83837a2f-936f-4af4-b223-b3e109491af4-operator-scripts\") pod \"83837a2f-936f-4af4-b223-b3e109491af4\" (UID: \"83837a2f-936f-4af4-b223-b3e109491af4\") " Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.000809 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8" (UID: "6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.001048 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87c522f8-072d-496b-936e-0a692e3c1149-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.001060 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfspj\" (UniqueName: \"kubernetes.io/projected/87c522f8-072d-496b-936e-0a692e3c1149-kube-api-access-nfspj\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.001071 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.001611 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83837a2f-936f-4af4-b223-b3e109491af4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83837a2f-936f-4af4-b223-b3e109491af4" (UID: "83837a2f-936f-4af4-b223-b3e109491af4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.002585 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67d150e1-af0e-45d5-b366-e9e550d7457a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67d150e1-af0e-45d5-b366-e9e550d7457a" (UID: "67d150e1-af0e-45d5-b366-e9e550d7457a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.005488 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67d150e1-af0e-45d5-b366-e9e550d7457a-kube-api-access-gj4kx" (OuterVolumeSpecName: "kube-api-access-gj4kx") pod "67d150e1-af0e-45d5-b366-e9e550d7457a" (UID: "67d150e1-af0e-45d5-b366-e9e550d7457a"). InnerVolumeSpecName "kube-api-access-gj4kx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.006127 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8-kube-api-access-jcb9v" (OuterVolumeSpecName: "kube-api-access-jcb9v") pod "6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8" (UID: "6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8"). InnerVolumeSpecName "kube-api-access-jcb9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.006657 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83837a2f-936f-4af4-b223-b3e109491af4-kube-api-access-rd9g2" (OuterVolumeSpecName: "kube-api-access-rd9g2") pod "83837a2f-936f-4af4-b223-b3e109491af4" (UID: "83837a2f-936f-4af4-b223-b3e109491af4"). InnerVolumeSpecName "kube-api-access-rd9g2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.125692 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17fbe0e3-4301-4a77-b1a1-ef966b69f21b-operator-scripts\") pod \"17fbe0e3-4301-4a77-b1a1-ef966b69f21b\" (UID: \"17fbe0e3-4301-4a77-b1a1-ef966b69f21b\") " Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.125841 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx6hz\" (UniqueName: \"kubernetes.io/projected/17fbe0e3-4301-4a77-b1a1-ef966b69f21b-kube-api-access-gx6hz\") pod \"17fbe0e3-4301-4a77-b1a1-ef966b69f21b\" (UID: \"17fbe0e3-4301-4a77-b1a1-ef966b69f21b\") " Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.126187 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd9g2\" (UniqueName: \"kubernetes.io/projected/83837a2f-936f-4af4-b223-b3e109491af4-kube-api-access-rd9g2\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.126205 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcb9v\" (UniqueName: \"kubernetes.io/projected/6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8-kube-api-access-jcb9v\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.126214 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67d150e1-af0e-45d5-b366-e9e550d7457a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.126223 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83837a2f-936f-4af4-b223-b3e109491af4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.126234 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj4kx\" (UniqueName: \"kubernetes.io/projected/67d150e1-af0e-45d5-b366-e9e550d7457a-kube-api-access-gj4kx\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.126702 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17fbe0e3-4301-4a77-b1a1-ef966b69f21b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "17fbe0e3-4301-4a77-b1a1-ef966b69f21b" (UID: "17fbe0e3-4301-4a77-b1a1-ef966b69f21b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.131154 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17fbe0e3-4301-4a77-b1a1-ef966b69f21b-kube-api-access-gx6hz" (OuterVolumeSpecName: "kube-api-access-gx6hz") pod "17fbe0e3-4301-4a77-b1a1-ef966b69f21b" (UID: "17fbe0e3-4301-4a77-b1a1-ef966b69f21b"). InnerVolumeSpecName "kube-api-access-gx6hz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.134580 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9620-account-create-2k958" event={"ID":"17fbe0e3-4301-4a77-b1a1-ef966b69f21b","Type":"ContainerDied","Data":"c44ae4ee0181ec469a53b6cbe76794f6c63d41926c8524ed62196f9012e4acb6"} Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.134616 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c44ae4ee0181ec469a53b6cbe76794f6c63d41926c8524ed62196f9012e4acb6" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.134668 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9620-account-create-2k958" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.140179 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-68lhn" event={"ID":"83837a2f-936f-4af4-b223-b3e109491af4","Type":"ContainerDied","Data":"d41cf4053de3c9cd6234834e63d75b4f6757e9f64918b4e10cb73df428e2a234"} Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.140230 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d41cf4053de3c9cd6234834e63d75b4f6757e9f64918b4e10cb73df428e2a234" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.140302 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-68lhn" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.143637 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5871-account-create-rxhx5" event={"ID":"87c522f8-072d-496b-936e-0a692e3c1149","Type":"ContainerDied","Data":"c2f239ac55af679aa61a962dc3d0e20c007f73de2afe47b458812464d96a7d00"} Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.143674 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2f239ac55af679aa61a962dc3d0e20c007f73de2afe47b458812464d96a7d00" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.143745 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5871-account-create-rxhx5" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.153851 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zzwz5" event={"ID":"6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8","Type":"ContainerDied","Data":"747a341b143293f4df95ac6926717a5813eddc0a0a83c39c8ee4902a8d53c955"} Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.153887 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="747a341b143293f4df95ac6926717a5813eddc0a0a83c39c8ee4902a8d53c955" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.153935 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-zzwz5" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.155990 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z8lj9" event={"ID":"67d150e1-af0e-45d5-b366-e9e550d7457a","Type":"ContainerDied","Data":"9cec85540d044851f435996f9de62300199ddb8a12e6b9677fc85d29a2cd11c2"} Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.156013 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cec85540d044851f435996f9de62300199ddb8a12e6b9677fc85d29a2cd11c2" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.156049 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z8lj9" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.157568 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9aef-account-create-mjnp6" event={"ID":"bfd60bb7-f834-43fd-9758-842ebcf0fc3b","Type":"ContainerDied","Data":"ac7f2c369d506a848dd5040c030ca12dc7648a10a2356cf149489b4fef721659"} Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.157608 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac7f2c369d506a848dd5040c030ca12dc7648a10a2356cf149489b4fef721659" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.157642 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9aef-account-create-mjnp6" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.230816 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17fbe0e3-4301-4a77-b1a1-ef966b69f21b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.230847 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx6hz\" (UniqueName: \"kubernetes.io/projected/17fbe0e3-4301-4a77-b1a1-ef966b69f21b-kube-api-access-gx6hz\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:27 crc kubenswrapper[4768]: I1124 17:09:27.317212 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Nov 24 17:09:27 crc kubenswrapper[4768]: W1124 17:09:27.318686 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52101039_0b1c_4531_8076_50dc3e77ae68.slice/crio-abb1ce0abed275391c953e6180a3c41c50113568de0845913f9eb8a631e3decf WatchSource:0}: Error finding container abb1ce0abed275391c953e6180a3c41c50113568de0845913f9eb8a631e3decf: Status 404 returned error can't find the container with id abb1ce0abed275391c953e6180a3c41c50113568de0845913f9eb8a631e3decf Nov 24 17:09:28 crc kubenswrapper[4768]: I1124 17:09:28.169059 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"52101039-0b1c-4531-8076-50dc3e77ae68","Type":"ContainerStarted","Data":"abb1ce0abed275391c953e6180a3c41c50113568de0845913f9eb8a631e3decf"} Nov 24 17:09:28 crc kubenswrapper[4768]: I1124 17:09:28.584298 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.155417 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6mx2x"] Nov 24 17:09:30 crc kubenswrapper[4768]: E1124 17:09:30.156882 4768 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bfd60bb7-f834-43fd-9758-842ebcf0fc3b" containerName="mariadb-account-create" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.156969 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfd60bb7-f834-43fd-9758-842ebcf0fc3b" containerName="mariadb-account-create" Nov 24 17:09:30 crc kubenswrapper[4768]: E1124 17:09:30.157026 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67d150e1-af0e-45d5-b366-e9e550d7457a" containerName="mariadb-database-create" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.157083 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="67d150e1-af0e-45d5-b366-e9e550d7457a" containerName="mariadb-database-create" Nov 24 17:09:30 crc kubenswrapper[4768]: E1124 17:09:30.157150 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83837a2f-936f-4af4-b223-b3e109491af4" containerName="mariadb-database-create" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.157200 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="83837a2f-936f-4af4-b223-b3e109491af4" containerName="mariadb-database-create" Nov 24 17:09:30 crc kubenswrapper[4768]: E1124 17:09:30.157254 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17fbe0e3-4301-4a77-b1a1-ef966b69f21b" containerName="mariadb-account-create" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.157307 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="17fbe0e3-4301-4a77-b1a1-ef966b69f21b" containerName="mariadb-account-create" Nov 24 17:09:30 crc kubenswrapper[4768]: E1124 17:09:30.157392 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87c522f8-072d-496b-936e-0a692e3c1149" containerName="mariadb-account-create" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.157453 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="87c522f8-072d-496b-936e-0a692e3c1149" containerName="mariadb-account-create" Nov 24 17:09:30 crc kubenswrapper[4768]: E1124 17:09:30.157519 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8" containerName="mariadb-database-create" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.157578 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8" containerName="mariadb-database-create" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.157815 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfd60bb7-f834-43fd-9758-842ebcf0fc3b" containerName="mariadb-account-create" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.159033 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="67d150e1-af0e-45d5-b366-e9e550d7457a" containerName="mariadb-database-create" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.159181 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8" containerName="mariadb-database-create" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.159250 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="17fbe0e3-4301-4a77-b1a1-ef966b69f21b" containerName="mariadb-account-create" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.159321 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="83837a2f-936f-4af4-b223-b3e109491af4" containerName="mariadb-database-create" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.159435 4768 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="87c522f8-072d-496b-936e-0a692e3c1149" containerName="mariadb-account-create" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.160189 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6mx2x" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.162108 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.162294 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7k9dw" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.162927 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.257826 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6mx2x"] Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.294095 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ae2678-f257-4fe9-b15c-72c7171320ad-config-data\") pod \"nova-cell0-conductor-db-sync-6mx2x\" (UID: \"63ae2678-f257-4fe9-b15c-72c7171320ad\") " pod="openstack/nova-cell0-conductor-db-sync-6mx2x" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.294165 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ae2678-f257-4fe9-b15c-72c7171320ad-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6mx2x\" (UID: \"63ae2678-f257-4fe9-b15c-72c7171320ad\") " pod="openstack/nova-cell0-conductor-db-sync-6mx2x" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.294200 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ae2678-f257-4fe9-b15c-72c7171320ad-scripts\") pod \"nova-cell0-conductor-db-sync-6mx2x\" (UID: \"63ae2678-f257-4fe9-b15c-72c7171320ad\") " pod="openstack/nova-cell0-conductor-db-sync-6mx2x" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.294341 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n55pb\" (UniqueName: \"kubernetes.io/projected/63ae2678-f257-4fe9-b15c-72c7171320ad-kube-api-access-n55pb\") pod \"nova-cell0-conductor-db-sync-6mx2x\" (UID: \"63ae2678-f257-4fe9-b15c-72c7171320ad\") " pod="openstack/nova-cell0-conductor-db-sync-6mx2x" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.395751 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n55pb\" (UniqueName: \"kubernetes.io/projected/63ae2678-f257-4fe9-b15c-72c7171320ad-kube-api-access-n55pb\") pod \"nova-cell0-conductor-db-sync-6mx2x\" (UID: \"63ae2678-f257-4fe9-b15c-72c7171320ad\") " pod="openstack/nova-cell0-conductor-db-sync-6mx2x" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.395829 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ae2678-f257-4fe9-b15c-72c7171320ad-config-data\") pod \"nova-cell0-conductor-db-sync-6mx2x\" (UID: \"63ae2678-f257-4fe9-b15c-72c7171320ad\") " pod="openstack/nova-cell0-conductor-db-sync-6mx2x" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.395877 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ae2678-f257-4fe9-b15c-72c7171320ad-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6mx2x\" (UID: \"63ae2678-f257-4fe9-b15c-72c7171320ad\") " pod="openstack/nova-cell0-conductor-db-sync-6mx2x" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.395898 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ae2678-f257-4fe9-b15c-72c7171320ad-scripts\") pod \"nova-cell0-conductor-db-sync-6mx2x\" (UID: \"63ae2678-f257-4fe9-b15c-72c7171320ad\") " pod="openstack/nova-cell0-conductor-db-sync-6mx2x" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.401708 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ae2678-f257-4fe9-b15c-72c7171320ad-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6mx2x\" (UID: \"63ae2678-f257-4fe9-b15c-72c7171320ad\") " pod="openstack/nova-cell0-conductor-db-sync-6mx2x" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.401875 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ae2678-f257-4fe9-b15c-72c7171320ad-scripts\") pod \"nova-cell0-conductor-db-sync-6mx2x\" (UID: \"63ae2678-f257-4fe9-b15c-72c7171320ad\") " pod="openstack/nova-cell0-conductor-db-sync-6mx2x" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.403758 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ae2678-f257-4fe9-b15c-72c7171320ad-config-data\") pod \"nova-cell0-conductor-db-sync-6mx2x\" (UID: \"63ae2678-f257-4fe9-b15c-72c7171320ad\") " pod="openstack/nova-cell0-conductor-db-sync-6mx2x" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.410285 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n55pb\" (UniqueName: \"kubernetes.io/projected/63ae2678-f257-4fe9-b15c-72c7171320ad-kube-api-access-n55pb\") pod \"nova-cell0-conductor-db-sync-6mx2x\" (UID: \"63ae2678-f257-4fe9-b15c-72c7171320ad\") " pod="openstack/nova-cell0-conductor-db-sync-6mx2x" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.483594 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6mx2x" Nov 24 17:09:30 crc kubenswrapper[4768]: I1124 17:09:30.878597 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-cb4d89897-bnsh5" Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.193858 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"be754be8-e18d-4413-bf31-5258e9ad4544","Type":"ContainerStarted","Data":"ab0034686029c38b70c68ff359eb34d1f366093fda51a85d3fc905bcd25a894c"} Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.194275 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.195391 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9da428d-f8bc-4aa5-a24e-2145450ef4a4","Type":"ContainerStarted","Data":"714c17223da5d712a59a590614feae91dc19139a848aa2d6a0ec2b9c27776d0f"} Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.197290 4768 generic.go:334] "Generic (PLEG): container finished" podID="52101039-0b1c-4531-8076-50dc3e77ae68" containerID="b40c0238190db7d1455c25b573f0bef2c05624d4d886a052aa3095e6f022b0a7" exitCode=0 Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.197377 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"52101039-0b1c-4531-8076-50dc3e77ae68","Type":"ContainerDied","Data":"b40c0238190db7d1455c25b573f0bef2c05624d4d886a052aa3095e6f022b0a7"} Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.199431 4768 generic.go:334] "Generic (PLEG): container finished" podID="fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd" containerID="02f0ac1e47fefcbd9f508eab0ca672d9b9bb160eb41cd1df089b5c5e5b902ef6" exitCode=0 Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.199473 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd","Type":"ContainerDied","Data":"02f0ac1e47fefcbd9f508eab0ca672d9b9bb160eb41cd1df089b5c5e5b902ef6"} Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.222500 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=5.4758960850000005 podStartE2EDuration="7.222476233s" podCreationTimestamp="2025-11-24 17:09:24 +0000 UTC" firstStartedPulling="2025-11-24 17:09:24.991412735 +0000 UTC m=+1046.238381393" lastFinishedPulling="2025-11-24 17:09:26.737992883 +0000 UTC m=+1047.984961541" observedRunningTime="2025-11-24 17:09:31.212133451 +0000 UTC m=+1052.459102109" watchObservedRunningTime="2025-11-24 17:09:31.222476233 +0000 UTC m=+1052.469444891" Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.274261 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6mx2x"] Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.592667 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.719669 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/52101039-0b1c-4531-8076-50dc3e77ae68-config\") pod \"52101039-0b1c-4531-8076-50dc3e77ae68\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.719726 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/52101039-0b1c-4531-8076-50dc3e77ae68-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"52101039-0b1c-4531-8076-50dc3e77ae68\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.719808 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzzxx\" (UniqueName: \"kubernetes.io/projected/52101039-0b1c-4531-8076-50dc3e77ae68-kube-api-access-bzzxx\") pod \"52101039-0b1c-4531-8076-50dc3e77ae68\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.719831 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/52101039-0b1c-4531-8076-50dc3e77ae68-etc-podinfo\") pod \"52101039-0b1c-4531-8076-50dc3e77ae68\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.719874 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52101039-0b1c-4531-8076-50dc3e77ae68-combined-ca-bundle\") pod \"52101039-0b1c-4531-8076-50dc3e77ae68\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.720282 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52101039-0b1c-4531-8076-50dc3e77ae68-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "52101039-0b1c-4531-8076-50dc3e77ae68" (UID: "52101039-0b1c-4531-8076-50dc3e77ae68"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.720555 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/52101039-0b1c-4531-8076-50dc3e77ae68-var-lib-ironic\") pod \"52101039-0b1c-4531-8076-50dc3e77ae68\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.720578 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52101039-0b1c-4531-8076-50dc3e77ae68-scripts\") pod \"52101039-0b1c-4531-8076-50dc3e77ae68\" (UID: \"52101039-0b1c-4531-8076-50dc3e77ae68\") " Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.720963 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52101039-0b1c-4531-8076-50dc3e77ae68-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "52101039-0b1c-4531-8076-50dc3e77ae68" (UID: "52101039-0b1c-4531-8076-50dc3e77ae68"). InnerVolumeSpecName "var-lib-ironic". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.721483 4768 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/52101039-0b1c-4531-8076-50dc3e77ae68-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.721502 4768 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/52101039-0b1c-4531-8076-50dc3e77ae68-var-lib-ironic\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.725711 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/52101039-0b1c-4531-8076-50dc3e77ae68-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "52101039-0b1c-4531-8076-50dc3e77ae68" (UID: "52101039-0b1c-4531-8076-50dc3e77ae68"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.748201 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52101039-0b1c-4531-8076-50dc3e77ae68-config" (OuterVolumeSpecName: "config") pod "52101039-0b1c-4531-8076-50dc3e77ae68" (UID: "52101039-0b1c-4531-8076-50dc3e77ae68"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.748237 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52101039-0b1c-4531-8076-50dc3e77ae68-scripts" (OuterVolumeSpecName: "scripts") pod "52101039-0b1c-4531-8076-50dc3e77ae68" (UID: "52101039-0b1c-4531-8076-50dc3e77ae68"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.748715 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52101039-0b1c-4531-8076-50dc3e77ae68-kube-api-access-bzzxx" (OuterVolumeSpecName: "kube-api-access-bzzxx") pod "52101039-0b1c-4531-8076-50dc3e77ae68" (UID: "52101039-0b1c-4531-8076-50dc3e77ae68"). InnerVolumeSpecName "kube-api-access-bzzxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.752958 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52101039-0b1c-4531-8076-50dc3e77ae68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52101039-0b1c-4531-8076-50dc3e77ae68" (UID: "52101039-0b1c-4531-8076-50dc3e77ae68"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.822763 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/52101039-0b1c-4531-8076-50dc3e77ae68-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.823100 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzzxx\" (UniqueName: \"kubernetes.io/projected/52101039-0b1c-4531-8076-50dc3e77ae68-kube-api-access-bzzxx\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.823111 4768 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/52101039-0b1c-4531-8076-50dc3e77ae68-etc-podinfo\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.823120 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52101039-0b1c-4531-8076-50dc3e77ae68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:31 crc kubenswrapper[4768]: I1124 17:09:31.823129 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52101039-0b1c-4531-8076-50dc3e77ae68-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.212246 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9da428d-f8bc-4aa5-a24e-2145450ef4a4","Type":"ContainerStarted","Data":"5d47b729609f8c5190eebb4af66899be4d4f2f2da1a69ed74dcc7de99981a9a8"} Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.220973 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"52101039-0b1c-4531-8076-50dc3e77ae68","Type":"ContainerDied","Data":"abb1ce0abed275391c953e6180a3c41c50113568de0845913f9eb8a631e3decf"} Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.221036 4768 scope.go:117] "RemoveContainer" containerID="b40c0238190db7d1455c25b573f0bef2c05624d4d886a052aa3095e6f022b0a7" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.221088 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.223000 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6mx2x" event={"ID":"63ae2678-f257-4fe9-b15c-72c7171320ad","Type":"ContainerStarted","Data":"a7c5f200a31c26673767845e6674a4eda98c41db6ed9cdc8551d34b5f0addfc7"} Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.338979 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.348265 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-0"] Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.380261 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Nov 24 17:09:32 crc kubenswrapper[4768]: E1124 17:09:32.380660 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52101039-0b1c-4531-8076-50dc3e77ae68" containerName="ironic-python-agent-init" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.380677 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="52101039-0b1c-4531-8076-50dc3e77ae68" containerName="ironic-python-agent-init" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.380904 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="52101039-0b1c-4531-8076-50dc3e77ae68" containerName="ironic-python-agent-init" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.388287 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.391157 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.391392 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-internal-svc" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.391580 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.394529 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-public-svc" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.402508 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.436389 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4150a56f-5273-4601-8abd-53554fee9e46-config\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.436441 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4150a56f-5273-4601-8abd-53554fee9e46-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.436473 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4150a56f-5273-4601-8abd-53554fee9e46-internal-tls-certs\") pod 
\"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.436499 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/4150a56f-5273-4601-8abd-53554fee9e46-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.436519 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4150a56f-5273-4601-8abd-53554fee9e46-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.436562 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/4150a56f-5273-4601-8abd-53554fee9e46-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.436580 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn8th\" (UniqueName: \"kubernetes.io/projected/4150a56f-5273-4601-8abd-53554fee9e46-kube-api-access-fn8th\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.436625 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/4150a56f-5273-4601-8abd-53554fee9e46-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.436645 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4150a56f-5273-4601-8abd-53554fee9e46-scripts\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.538831 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4150a56f-5273-4601-8abd-53554fee9e46-config\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.539093 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4150a56f-5273-4601-8abd-53554fee9e46-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.539131 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4150a56f-5273-4601-8abd-53554fee9e46-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " 
pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.539159 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/4150a56f-5273-4601-8abd-53554fee9e46-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.539180 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4150a56f-5273-4601-8abd-53554fee9e46-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.539220 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/4150a56f-5273-4601-8abd-53554fee9e46-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.539236 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn8th\" (UniqueName: \"kubernetes.io/projected/4150a56f-5273-4601-8abd-53554fee9e46-kube-api-access-fn8th\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.539277 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/4150a56f-5273-4601-8abd-53554fee9e46-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.539296 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4150a56f-5273-4601-8abd-53554fee9e46-scripts\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.539775 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/4150a56f-5273-4601-8abd-53554fee9e46-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.540698 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/4150a56f-5273-4601-8abd-53554fee9e46-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.546682 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4150a56f-5273-4601-8abd-53554fee9e46-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.547280 4768 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4150a56f-5273-4601-8abd-53554fee9e46-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.547673 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4150a56f-5273-4601-8abd-53554fee9e46-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.549322 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4150a56f-5273-4601-8abd-53554fee9e46-scripts\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.557969 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4150a56f-5273-4601-8abd-53554fee9e46-config\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.560547 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn8th\" (UniqueName: \"kubernetes.io/projected/4150a56f-5273-4601-8abd-53554fee9e46-kube-api-access-fn8th\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.568981 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/4150a56f-5273-4601-8abd-53554fee9e46-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"4150a56f-5273-4601-8abd-53554fee9e46\") " pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.731953 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.893794 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:32 crc kubenswrapper[4768]: I1124 17:09:32.895023 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-68997d6dc7-xqk74" Nov 24 17:09:33 crc kubenswrapper[4768]: I1124 17:09:33.221751 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Nov 24 17:09:33 crc kubenswrapper[4768]: I1124 17:09:33.240411 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"4150a56f-5273-4601-8abd-53554fee9e46","Type":"ContainerStarted","Data":"73e95eb9a923989f795051f5eb96b4205f11334d0c2782cb28225bec82d3c3ed"} Nov 24 17:09:33 crc kubenswrapper[4768]: I1124 17:09:33.249843 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9da428d-f8bc-4aa5-a24e-2145450ef4a4","Type":"ContainerStarted","Data":"d4a75414ec1fc1ad510b117a5f8f40b6c9a7bd1da033b2dae79b80379e047484"} Nov 24 17:09:33 crc kubenswrapper[4768]: I1124 17:09:33.591098 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52101039-0b1c-4531-8076-50dc3e77ae68" path="/var/lib/kubelet/pods/52101039-0b1c-4531-8076-50dc3e77ae68/volumes" Nov 24 17:09:34 crc kubenswrapper[4768]: I1124 17:09:34.266984 4768 generic.go:334] "Generic (PLEG): container finished" podID="4150a56f-5273-4601-8abd-53554fee9e46" containerID="3676ce97aaf01b2bc3c5cf0b9445039cd32c0f9f4a80a9088fbd57039d2cc327" exitCode=0 Nov 24 17:09:34 crc kubenswrapper[4768]: I1124 17:09:34.267271 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"4150a56f-5273-4601-8abd-53554fee9e46","Type":"ContainerDied","Data":"3676ce97aaf01b2bc3c5cf0b9445039cd32c0f9f4a80a9088fbd57039d2cc327"} Nov 24 17:09:34 crc kubenswrapper[4768]: I1124 17:09:34.892695 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:09:34 crc kubenswrapper[4768]: I1124 17:09:34.892750 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:09:34 crc kubenswrapper[4768]: I1124 17:09:34.892793 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf255" Nov 24 17:09:34 crc kubenswrapper[4768]: I1124 17:09:34.893439 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a2e9550255187c12513b9f3f9cfbe5c32ed6243e82d0531966cc6a07af83a0c7"} pod="openshift-machine-config-operator/machine-config-daemon-jf255" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 17:09:34 crc kubenswrapper[4768]: I1124 17:09:34.893489 4768 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" containerID="cri-o://a2e9550255187c12513b9f3f9cfbe5c32ed6243e82d0531966cc6a07af83a0c7" gracePeriod=600 Nov 24 17:09:35 crc kubenswrapper[4768]: I1124 17:09:35.111844 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:09:35 crc kubenswrapper[4768]: I1124 17:09:35.282402 4768 generic.go:334] "Generic (PLEG): container finished" podID="517d8128-bef5-40a3-a786-5010780c2a58" containerID="a2e9550255187c12513b9f3f9cfbe5c32ed6243e82d0531966cc6a07af83a0c7" exitCode=0 Nov 24 17:09:35 crc kubenswrapper[4768]: I1124 17:09:35.282480 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" event={"ID":"517d8128-bef5-40a3-a786-5010780c2a58","Type":"ContainerDied","Data":"a2e9550255187c12513b9f3f9cfbe5c32ed6243e82d0531966cc6a07af83a0c7"} Nov 24 17:09:35 crc kubenswrapper[4768]: I1124 17:09:35.282826 4768 scope.go:117] "RemoveContainer" containerID="95ef721fc07cbc17b0f7e83371486f8b9c131887d050be1100a4afc5d9e98d85" Nov 24 17:09:35 crc kubenswrapper[4768]: I1124 17:09:35.288261 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9da428d-f8bc-4aa5-a24e-2145450ef4a4","Type":"ContainerStarted","Data":"e9cdcfd2d165b6357ea34b8fefee3246918e1fd41031d6e4c894903be5a273e8"} Nov 24 17:09:35 crc kubenswrapper[4768]: I1124 17:09:35.288465 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 17:09:35 crc kubenswrapper[4768]: I1124 17:09:35.314708 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.1404748749999998 podStartE2EDuration="11.314692525s" podCreationTimestamp="2025-11-24 17:09:24 +0000 UTC" firstStartedPulling="2025-11-24 17:09:24.944601091 +0000 UTC m=+1046.191569749" lastFinishedPulling="2025-11-24 17:09:34.118818721 +0000 UTC m=+1055.365787399" observedRunningTime="2025-11-24 17:09:35.311744431 +0000 UTC m=+1056.558713109" watchObservedRunningTime="2025-11-24 17:09:35.314692525 +0000 UTC m=+1056.561661183" Nov 24 17:09:36 crc kubenswrapper[4768]: I1124 17:09:36.296828 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9da428d-f8bc-4aa5-a24e-2145450ef4a4" containerName="ceilometer-central-agent" containerID="cri-o://714c17223da5d712a59a590614feae91dc19139a848aa2d6a0ec2b9c27776d0f" gracePeriod=30 Nov 24 17:09:36 crc kubenswrapper[4768]: I1124 17:09:36.296910 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9da428d-f8bc-4aa5-a24e-2145450ef4a4" containerName="proxy-httpd" containerID="cri-o://e9cdcfd2d165b6357ea34b8fefee3246918e1fd41031d6e4c894903be5a273e8" gracePeriod=30 Nov 24 17:09:36 crc kubenswrapper[4768]: I1124 17:09:36.296955 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9da428d-f8bc-4aa5-a24e-2145450ef4a4" containerName="ceilometer-notification-agent" containerID="cri-o://5d47b729609f8c5190eebb4af66899be4d4f2f2da1a69ed74dcc7de99981a9a8" gracePeriod=30 Nov 24 17:09:36 crc kubenswrapper[4768]: I1124 17:09:36.296911 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9da428d-f8bc-4aa5-a24e-2145450ef4a4" containerName="sg-core" 
containerID="cri-o://d4a75414ec1fc1ad510b117a5f8f40b6c9a7bd1da033b2dae79b80379e047484" gracePeriod=30 Nov 24 17:09:37 crc kubenswrapper[4768]: I1124 17:09:37.308460 4768 generic.go:334] "Generic (PLEG): container finished" podID="f9da428d-f8bc-4aa5-a24e-2145450ef4a4" containerID="e9cdcfd2d165b6357ea34b8fefee3246918e1fd41031d6e4c894903be5a273e8" exitCode=0 Nov 24 17:09:37 crc kubenswrapper[4768]: I1124 17:09:37.308707 4768 generic.go:334] "Generic (PLEG): container finished" podID="f9da428d-f8bc-4aa5-a24e-2145450ef4a4" containerID="d4a75414ec1fc1ad510b117a5f8f40b6c9a7bd1da033b2dae79b80379e047484" exitCode=2 Nov 24 17:09:37 crc kubenswrapper[4768]: I1124 17:09:37.308715 4768 generic.go:334] "Generic (PLEG): container finished" podID="f9da428d-f8bc-4aa5-a24e-2145450ef4a4" containerID="5d47b729609f8c5190eebb4af66899be4d4f2f2da1a69ed74dcc7de99981a9a8" exitCode=0 Nov 24 17:09:37 crc kubenswrapper[4768]: I1124 17:09:37.308543 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9da428d-f8bc-4aa5-a24e-2145450ef4a4","Type":"ContainerDied","Data":"e9cdcfd2d165b6357ea34b8fefee3246918e1fd41031d6e4c894903be5a273e8"} Nov 24 17:09:37 crc kubenswrapper[4768]: I1124 17:09:37.308752 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9da428d-f8bc-4aa5-a24e-2145450ef4a4","Type":"ContainerDied","Data":"d4a75414ec1fc1ad510b117a5f8f40b6c9a7bd1da033b2dae79b80379e047484"} Nov 24 17:09:37 crc kubenswrapper[4768]: I1124 17:09:37.308768 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9da428d-f8bc-4aa5-a24e-2145450ef4a4","Type":"ContainerDied","Data":"5d47b729609f8c5190eebb4af66899be4d4f2f2da1a69ed74dcc7de99981a9a8"} Nov 24 17:09:40 crc kubenswrapper[4768]: I1124 17:09:40.343028 4768 generic.go:334] "Generic (PLEG): container finished" podID="f9da428d-f8bc-4aa5-a24e-2145450ef4a4" containerID="714c17223da5d712a59a590614feae91dc19139a848aa2d6a0ec2b9c27776d0f" exitCode=0 Nov 24 17:09:40 crc kubenswrapper[4768]: I1124 17:09:40.343130 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9da428d-f8bc-4aa5-a24e-2145450ef4a4","Type":"ContainerDied","Data":"714c17223da5d712a59a590614feae91dc19139a848aa2d6a0ec2b9c27776d0f"} Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.071114 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.137763 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-sg-core-conf-yaml\") pod \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.137820 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6dmc\" (UniqueName: \"kubernetes.io/projected/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-kube-api-access-w6dmc\") pod \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.137905 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-log-httpd\") pod \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.137945 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-scripts\") pod \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.138009 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-combined-ca-bundle\") pod \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.138068 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-ceilometer-tls-certs\") pod \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.138151 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-config-data\") pod \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.138206 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-run-httpd\") pod \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\" (UID: \"f9da428d-f8bc-4aa5-a24e-2145450ef4a4\") " Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.138548 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f9da428d-f8bc-4aa5-a24e-2145450ef4a4" (UID: "f9da428d-f8bc-4aa5-a24e-2145450ef4a4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.138758 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f9da428d-f8bc-4aa5-a24e-2145450ef4a4" (UID: "f9da428d-f8bc-4aa5-a24e-2145450ef4a4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.139021 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.139038 4768 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.143474 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-kube-api-access-w6dmc" (OuterVolumeSpecName: "kube-api-access-w6dmc") pod "f9da428d-f8bc-4aa5-a24e-2145450ef4a4" (UID: "f9da428d-f8bc-4aa5-a24e-2145450ef4a4"). InnerVolumeSpecName "kube-api-access-w6dmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.146981 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-scripts" (OuterVolumeSpecName: "scripts") pod "f9da428d-f8bc-4aa5-a24e-2145450ef4a4" (UID: "f9da428d-f8bc-4aa5-a24e-2145450ef4a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.184487 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f9da428d-f8bc-4aa5-a24e-2145450ef4a4" (UID: "f9da428d-f8bc-4aa5-a24e-2145450ef4a4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.195768 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f9da428d-f8bc-4aa5-a24e-2145450ef4a4" (UID: "f9da428d-f8bc-4aa5-a24e-2145450ef4a4"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.224661 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9da428d-f8bc-4aa5-a24e-2145450ef4a4" (UID: "f9da428d-f8bc-4aa5-a24e-2145450ef4a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.240162 4768 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.240191 4768 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.240201 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6dmc\" (UniqueName: \"kubernetes.io/projected/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-kube-api-access-w6dmc\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.240215 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.240224 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.249232 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-config-data" (OuterVolumeSpecName: "config-data") pod "f9da428d-f8bc-4aa5-a24e-2145450ef4a4" (UID: "f9da428d-f8bc-4aa5-a24e-2145450ef4a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.342279 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9da428d-f8bc-4aa5-a24e-2145450ef4a4-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.364861 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd","Type":"ContainerStarted","Data":"47d90a3736efa4b09c295da65b5e9ca41c0c7bf43f930e89b8fb8fbd04eb9489"} Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.368325 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"4150a56f-5273-4601-8abd-53554fee9e46","Type":"ContainerStarted","Data":"16dcb730ae5c57628c71b57f173b4a36cdea9904a631e987faffb7916c8348b1"} Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.374363 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9da428d-f8bc-4aa5-a24e-2145450ef4a4","Type":"ContainerDied","Data":"a4743bdb0cba0eae8d446d0acd4673e97dc6d0eea53cf35efd0468c4b558b4bc"} Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.374424 4768 scope.go:117] "RemoveContainer" containerID="e9cdcfd2d165b6357ea34b8fefee3246918e1fd41031d6e4c894903be5a273e8" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.374560 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.387793 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6mx2x" event={"ID":"63ae2678-f257-4fe9-b15c-72c7171320ad","Type":"ContainerStarted","Data":"9b39fb665310ba9e8722b968f15960332d4a4f4db8ead5a1de7d392565bda217"} Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.398088 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" event={"ID":"517d8128-bef5-40a3-a786-5010780c2a58","Type":"ContainerStarted","Data":"2365a36edb89edb46f3a062496f3dfcc63c2a3b858eedccc75acd6744646ba2d"} Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.408400 4768 scope.go:117] "RemoveContainer" containerID="d4a75414ec1fc1ad510b117a5f8f40b6c9a7bd1da033b2dae79b80379e047484" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.450537 4768 scope.go:117] "RemoveContainer" containerID="5d47b729609f8c5190eebb4af66899be4d4f2f2da1a69ed74dcc7de99981a9a8" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.464773 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-6mx2x" podStartSLOduration=1.9397025270000001 podStartE2EDuration="12.46475199s" podCreationTimestamp="2025-11-24 17:09:30 +0000 UTC" firstStartedPulling="2025-11-24 17:09:31.2991273 +0000 UTC m=+1052.546095958" lastFinishedPulling="2025-11-24 17:09:41.824176763 +0000 UTC m=+1063.071145421" observedRunningTime="2025-11-24 17:09:42.420368225 +0000 UTC m=+1063.667336883" watchObservedRunningTime="2025-11-24 17:09:42.46475199 +0000 UTC m=+1063.711720648" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.477333 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.494175 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.521773 4768 scope.go:117] "RemoveContainer" containerID="714c17223da5d712a59a590614feae91dc19139a848aa2d6a0ec2b9c27776d0f" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.521986 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:09:42 crc kubenswrapper[4768]: E1124 17:09:42.522452 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9da428d-f8bc-4aa5-a24e-2145450ef4a4" containerName="proxy-httpd" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.522749 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9da428d-f8bc-4aa5-a24e-2145450ef4a4" containerName="proxy-httpd" Nov 24 17:09:42 crc kubenswrapper[4768]: E1124 17:09:42.522983 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9da428d-f8bc-4aa5-a24e-2145450ef4a4" containerName="ceilometer-central-agent" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.523199 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9da428d-f8bc-4aa5-a24e-2145450ef4a4" containerName="ceilometer-central-agent" Nov 24 17:09:42 crc kubenswrapper[4768]: E1124 17:09:42.523769 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9da428d-f8bc-4aa5-a24e-2145450ef4a4" containerName="ceilometer-notification-agent" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.524075 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9da428d-f8bc-4aa5-a24e-2145450ef4a4" 
containerName="ceilometer-notification-agent" Nov 24 17:09:42 crc kubenswrapper[4768]: E1124 17:09:42.524169 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9da428d-f8bc-4aa5-a24e-2145450ef4a4" containerName="sg-core" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.524243 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9da428d-f8bc-4aa5-a24e-2145450ef4a4" containerName="sg-core" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.525784 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9da428d-f8bc-4aa5-a24e-2145450ef4a4" containerName="sg-core" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.525888 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9da428d-f8bc-4aa5-a24e-2145450ef4a4" containerName="ceilometer-central-agent" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.525986 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9da428d-f8bc-4aa5-a24e-2145450ef4a4" containerName="ceilometer-notification-agent" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.526093 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9da428d-f8bc-4aa5-a24e-2145450ef4a4" containerName="proxy-httpd" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.528890 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.532381 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.532848 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.533023 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.533136 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.656764 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.656998 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.657049 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-config-data\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.657095 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e27086a-b3f1-4532-b288-9bac47e38944-run-httpd\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " 
pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.657133 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84t8j\" (UniqueName: \"kubernetes.io/projected/9e27086a-b3f1-4532-b288-9bac47e38944-kube-api-access-84t8j\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.657153 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.657325 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e27086a-b3f1-4532-b288-9bac47e38944-log-httpd\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.657450 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-scripts\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.759622 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84t8j\" (UniqueName: \"kubernetes.io/projected/9e27086a-b3f1-4532-b288-9bac47e38944-kube-api-access-84t8j\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.759683 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.759759 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e27086a-b3f1-4532-b288-9bac47e38944-log-httpd\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.759808 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-scripts\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.759926 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.759950 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.760029 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-config-data\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.760080 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e27086a-b3f1-4532-b288-9bac47e38944-run-httpd\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.760446 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e27086a-b3f1-4532-b288-9bac47e38944-log-httpd\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.760675 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e27086a-b3f1-4532-b288-9bac47e38944-run-httpd\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.765778 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.766856 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.782108 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-scripts\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.782561 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-config-data\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.783435 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.785653 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84t8j\" (UniqueName: \"kubernetes.io/projected/9e27086a-b3f1-4532-b288-9bac47e38944-kube-api-access-84t8j\") 
pod \"ceilometer-0\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " pod="openstack/ceilometer-0" Nov 24 17:09:42 crc kubenswrapper[4768]: I1124 17:09:42.849125 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 17:09:43 crc kubenswrapper[4768]: I1124 17:09:43.300007 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:09:43 crc kubenswrapper[4768]: W1124 17:09:43.307387 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e27086a_b3f1_4532_b288_9bac47e38944.slice/crio-b881f2f026771771b76096ecdf0e78e380025b22e6ec392e1b42fe855064f2d4 WatchSource:0}: Error finding container b881f2f026771771b76096ecdf0e78e380025b22e6ec392e1b42fe855064f2d4: Status 404 returned error can't find the container with id b881f2f026771771b76096ecdf0e78e380025b22e6ec392e1b42fe855064f2d4 Nov 24 17:09:43 crc kubenswrapper[4768]: I1124 17:09:43.409774 4768 generic.go:334] "Generic (PLEG): container finished" podID="4150a56f-5273-4601-8abd-53554fee9e46" containerID="16dcb730ae5c57628c71b57f173b4a36cdea9904a631e987faffb7916c8348b1" exitCode=0 Nov 24 17:09:43 crc kubenswrapper[4768]: I1124 17:09:43.409834 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"4150a56f-5273-4601-8abd-53554fee9e46","Type":"ContainerDied","Data":"16dcb730ae5c57628c71b57f173b4a36cdea9904a631e987faffb7916c8348b1"} Nov 24 17:09:43 crc kubenswrapper[4768]: I1124 17:09:43.412050 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e27086a-b3f1-4532-b288-9bac47e38944","Type":"ContainerStarted","Data":"b881f2f026771771b76096ecdf0e78e380025b22e6ec392e1b42fe855064f2d4"} Nov 24 17:09:43 crc kubenswrapper[4768]: I1124 17:09:43.612570 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9da428d-f8bc-4aa5-a24e-2145450ef4a4" path="/var/lib/kubelet/pods/f9da428d-f8bc-4aa5-a24e-2145450ef4a4/volumes" Nov 24 17:09:44 crc kubenswrapper[4768]: I1124 17:09:44.427764 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e27086a-b3f1-4532-b288-9bac47e38944","Type":"ContainerStarted","Data":"32565bc959dd0d8638b8a9c16beb66fe80e8480553a2324b0a546a9067af26cc"} Nov 24 17:09:44 crc kubenswrapper[4768]: I1124 17:09:44.437680 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"4150a56f-5273-4601-8abd-53554fee9e46","Type":"ContainerStarted","Data":"2e2f17449f7039559e23055821c8bfca243b081737f1b884c1829a30e990411d"} Nov 24 17:09:44 crc kubenswrapper[4768]: I1124 17:09:44.437726 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"4150a56f-5273-4601-8abd-53554fee9e46","Type":"ContainerStarted","Data":"1e763f75fe64c411f72935c6bf8447453a64617a82b4be695c2879afd2cacaba"} Nov 24 17:09:44 crc kubenswrapper[4768]: I1124 17:09:44.521612 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 17:09:44 crc kubenswrapper[4768]: I1124 17:09:44.557208 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 17:09:44 crc kubenswrapper[4768]: I1124 17:09:44.558380 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3bdd840c-08db-42db-bd50-4f14b4dffbda" 
containerName="glance-log" containerID="cri-o://79e810ba5ac54f56725cdc2354c425fc340f5accbb407f85b1156e27a4b166df" gracePeriod=30 Nov 24 17:09:44 crc kubenswrapper[4768]: I1124 17:09:44.558523 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3bdd840c-08db-42db-bd50-4f14b4dffbda" containerName="glance-httpd" containerID="cri-o://72d0aec70e92f2784e1c5d3b95cbd1f80306b398367150b8a78b6fcbe8a857be" gracePeriod=30 Nov 24 17:09:45 crc kubenswrapper[4768]: I1124 17:09:45.450721 4768 generic.go:334] "Generic (PLEG): container finished" podID="3bdd840c-08db-42db-bd50-4f14b4dffbda" containerID="79e810ba5ac54f56725cdc2354c425fc340f5accbb407f85b1156e27a4b166df" exitCode=143 Nov 24 17:09:45 crc kubenswrapper[4768]: I1124 17:09:45.450811 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3bdd840c-08db-42db-bd50-4f14b4dffbda","Type":"ContainerDied","Data":"79e810ba5ac54f56725cdc2354c425fc340f5accbb407f85b1156e27a4b166df"} Nov 24 17:09:45 crc kubenswrapper[4768]: I1124 17:09:45.453559 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e27086a-b3f1-4532-b288-9bac47e38944","Type":"ContainerStarted","Data":"696b9c29ccefbb4166f903c633d8ccab74be755e3b04fe83a9a75e3bf13ae8b5"} Nov 24 17:09:45 crc kubenswrapper[4768]: I1124 17:09:45.456580 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"4150a56f-5273-4601-8abd-53554fee9e46","Type":"ContainerStarted","Data":"f291c2b30279a53e222c1c422dc58f68227da89b8872033fec739525c4ad2b5c"} Nov 24 17:09:46 crc kubenswrapper[4768]: I1124 17:09:46.480858 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"4150a56f-5273-4601-8abd-53554fee9e46","Type":"ContainerStarted","Data":"db99def0c566862a06202bc396a570c10d4edb51c77f8f2f4d2949cecda045a8"} Nov 24 17:09:46 crc kubenswrapper[4768]: I1124 17:09:46.485464 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Nov 24 17:09:46 crc kubenswrapper[4768]: I1124 17:09:46.549882 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 17:09:46 crc kubenswrapper[4768]: I1124 17:09:46.550107 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7" containerName="glance-log" containerID="cri-o://2ad5ec824ee5334baae153e8bf0deda8a0353c87f9ab8af6aa26ef61a6df8bb6" gracePeriod=30 Nov 24 17:09:46 crc kubenswrapper[4768]: I1124 17:09:46.550252 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7" containerName="glance-httpd" containerID="cri-o://781b648f95fba0a21385dde8435f3d2e3be4edb0dfb2d49ae1b282b12b20427b" gracePeriod=30 Nov 24 17:09:46 crc kubenswrapper[4768]: I1124 17:09:46.560271 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=7.010587615 podStartE2EDuration="14.560252736s" podCreationTimestamp="2025-11-24 17:09:32 +0000 UTC" firstStartedPulling="2025-11-24 17:09:34.269227773 +0000 UTC m=+1055.516196421" lastFinishedPulling="2025-11-24 17:09:41.818892884 +0000 UTC m=+1063.065861542" observedRunningTime="2025-11-24 17:09:46.555867782 +0000 UTC m=+1067.802836440" 
watchObservedRunningTime="2025-11-24 17:09:46.560252736 +0000 UTC m=+1067.807221394" Nov 24 17:09:47 crc kubenswrapper[4768]: I1124 17:09:46.999758 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:09:47 crc kubenswrapper[4768]: I1124 17:09:47.491953 4768 generic.go:334] "Generic (PLEG): container finished" podID="4150a56f-5273-4601-8abd-53554fee9e46" containerID="2e2f17449f7039559e23055821c8bfca243b081737f1b884c1829a30e990411d" exitCode=0 Nov 24 17:09:47 crc kubenswrapper[4768]: I1124 17:09:47.492186 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"4150a56f-5273-4601-8abd-53554fee9e46","Type":"ContainerDied","Data":"2e2f17449f7039559e23055821c8bfca243b081737f1b884c1829a30e990411d"} Nov 24 17:09:47 crc kubenswrapper[4768]: I1124 17:09:47.492988 4768 scope.go:117] "RemoveContainer" containerID="2e2f17449f7039559e23055821c8bfca243b081737f1b884c1829a30e990411d" Nov 24 17:09:47 crc kubenswrapper[4768]: I1124 17:09:47.495709 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e27086a-b3f1-4532-b288-9bac47e38944","Type":"ContainerStarted","Data":"f72247f294691b11b929b4e58714109eed076cc3172a18400938a2a87dca8c9a"} Nov 24 17:09:47 crc kubenswrapper[4768]: I1124 17:09:47.500036 4768 generic.go:334] "Generic (PLEG): container finished" podID="21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7" containerID="2ad5ec824ee5334baae153e8bf0deda8a0353c87f9ab8af6aa26ef61a6df8bb6" exitCode=143 Nov 24 17:09:47 crc kubenswrapper[4768]: I1124 17:09:47.500086 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7","Type":"ContainerDied","Data":"2ad5ec824ee5334baae153e8bf0deda8a0353c87f9ab8af6aa26ef61a6df8bb6"} Nov 24 17:09:47 crc kubenswrapper[4768]: I1124 17:09:47.732460 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Nov 24 17:09:47 crc kubenswrapper[4768]: I1124 17:09:47.732515 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Nov 24 17:09:47 crc kubenswrapper[4768]: I1124 17:09:47.732527 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-inspector-0" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.483110 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.546654 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"4150a56f-5273-4601-8abd-53554fee9e46","Type":"ContainerStarted","Data":"6094cd6b731f3ff0a4db0c658db6cbdd83434479a24625b3f06afea64fc48d84"} Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.552394 4768 generic.go:334] "Generic (PLEG): container finished" podID="3bdd840c-08db-42db-bd50-4f14b4dffbda" containerID="72d0aec70e92f2784e1c5d3b95cbd1f80306b398367150b8a78b6fcbe8a857be" exitCode=0 Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.552469 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3bdd840c-08db-42db-bd50-4f14b4dffbda","Type":"ContainerDied","Data":"72d0aec70e92f2784e1c5d3b95cbd1f80306b398367150b8a78b6fcbe8a857be"} Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.552503 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3bdd840c-08db-42db-bd50-4f14b4dffbda","Type":"ContainerDied","Data":"499656a337329f0593fd3450efc69319e19cf6948ac74a665827ed786e96abf0"} Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.552524 4768 scope.go:117] "RemoveContainer" containerID="72d0aec70e92f2784e1c5d3b95cbd1f80306b398367150b8a78b6fcbe8a857be" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.552683 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.571501 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e27086a-b3f1-4532-b288-9bac47e38944","Type":"ContainerStarted","Data":"4862fa73ef3cd69111d9d2154a09525a5dfff19ffe3810ca4156d4346115bb33"} Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.571707 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e27086a-b3f1-4532-b288-9bac47e38944" containerName="ceilometer-central-agent" containerID="cri-o://32565bc959dd0d8638b8a9c16beb66fe80e8480553a2324b0a546a9067af26cc" gracePeriod=30 Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.572005 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.572057 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e27086a-b3f1-4532-b288-9bac47e38944" containerName="proxy-httpd" containerID="cri-o://4862fa73ef3cd69111d9d2154a09525a5dfff19ffe3810ca4156d4346115bb33" gracePeriod=30 Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.572111 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e27086a-b3f1-4532-b288-9bac47e38944" containerName="sg-core" containerID="cri-o://f72247f294691b11b929b4e58714109eed076cc3172a18400938a2a87dca8c9a" gracePeriod=30 Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.572162 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e27086a-b3f1-4532-b288-9bac47e38944" containerName="ceilometer-notification-agent" containerID="cri-o://696b9c29ccefbb4166f903c633d8ccab74be755e3b04fe83a9a75e3bf13ae8b5" gracePeriod=30 Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.585874 4768 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"3bdd840c-08db-42db-bd50-4f14b4dffbda\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.585961 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-scripts\") pod \"3bdd840c-08db-42db-bd50-4f14b4dffbda\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.586029 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-combined-ca-bundle\") pod \"3bdd840c-08db-42db-bd50-4f14b4dffbda\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.586130 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-config-data\") pod \"3bdd840c-08db-42db-bd50-4f14b4dffbda\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.586210 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-public-tls-certs\") pod \"3bdd840c-08db-42db-bd50-4f14b4dffbda\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.586252 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bdd840c-08db-42db-bd50-4f14b4dffbda-logs\") pod \"3bdd840c-08db-42db-bd50-4f14b4dffbda\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.586272 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3bdd840c-08db-42db-bd50-4f14b4dffbda-httpd-run\") pod \"3bdd840c-08db-42db-bd50-4f14b4dffbda\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.586297 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp7dk\" (UniqueName: \"kubernetes.io/projected/3bdd840c-08db-42db-bd50-4f14b4dffbda-kube-api-access-dp7dk\") pod \"3bdd840c-08db-42db-bd50-4f14b4dffbda\" (UID: \"3bdd840c-08db-42db-bd50-4f14b4dffbda\") " Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.588507 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bdd840c-08db-42db-bd50-4f14b4dffbda-logs" (OuterVolumeSpecName: "logs") pod "3bdd840c-08db-42db-bd50-4f14b4dffbda" (UID: "3bdd840c-08db-42db-bd50-4f14b4dffbda"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.599046 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bdd840c-08db-42db-bd50-4f14b4dffbda-kube-api-access-dp7dk" (OuterVolumeSpecName: "kube-api-access-dp7dk") pod "3bdd840c-08db-42db-bd50-4f14b4dffbda" (UID: "3bdd840c-08db-42db-bd50-4f14b4dffbda"). InnerVolumeSpecName "kube-api-access-dp7dk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.599703 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bdd840c-08db-42db-bd50-4f14b4dffbda-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3bdd840c-08db-42db-bd50-4f14b4dffbda" (UID: "3bdd840c-08db-42db-bd50-4f14b4dffbda"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.602531 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-scripts" (OuterVolumeSpecName: "scripts") pod "3bdd840c-08db-42db-bd50-4f14b4dffbda" (UID: "3bdd840c-08db-42db-bd50-4f14b4dffbda"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.642745 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "3bdd840c-08db-42db-bd50-4f14b4dffbda" (UID: "3bdd840c-08db-42db-bd50-4f14b4dffbda"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.657008 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.819459463 podStartE2EDuration="6.656983612s" podCreationTimestamp="2025-11-24 17:09:42 +0000 UTC" firstStartedPulling="2025-11-24 17:09:43.31097411 +0000 UTC m=+1064.557942768" lastFinishedPulling="2025-11-24 17:09:48.148498259 +0000 UTC m=+1069.395466917" observedRunningTime="2025-11-24 17:09:48.618768192 +0000 UTC m=+1069.865736850" watchObservedRunningTime="2025-11-24 17:09:48.656983612 +0000 UTC m=+1069.903952290" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.696474 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.696510 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.696520 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bdd840c-08db-42db-bd50-4f14b4dffbda-logs\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.696530 4768 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3bdd840c-08db-42db-bd50-4f14b4dffbda-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.696543 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp7dk\" (UniqueName: \"kubernetes.io/projected/3bdd840c-08db-42db-bd50-4f14b4dffbda-kube-api-access-dp7dk\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.713018 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bdd840c-08db-42db-bd50-4f14b4dffbda" (UID: 
"3bdd840c-08db-42db-bd50-4f14b4dffbda"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.728227 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.746484 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3bdd840c-08db-42db-bd50-4f14b4dffbda" (UID: "3bdd840c-08db-42db-bd50-4f14b4dffbda"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.763714 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-config-data" (OuterVolumeSpecName: "config-data") pod "3bdd840c-08db-42db-bd50-4f14b4dffbda" (UID: "3bdd840c-08db-42db-bd50-4f14b4dffbda"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.798165 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.798202 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.798211 4768 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bdd840c-08db-42db-bd50-4f14b4dffbda-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.798221 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.800080 4768 scope.go:117] "RemoveContainer" containerID="79e810ba5ac54f56725cdc2354c425fc340f5accbb407f85b1156e27a4b166df" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.827667 4768 scope.go:117] "RemoveContainer" containerID="72d0aec70e92f2784e1c5d3b95cbd1f80306b398367150b8a78b6fcbe8a857be" Nov 24 17:09:48 crc kubenswrapper[4768]: E1124 17:09:48.829856 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72d0aec70e92f2784e1c5d3b95cbd1f80306b398367150b8a78b6fcbe8a857be\": container with ID starting with 72d0aec70e92f2784e1c5d3b95cbd1f80306b398367150b8a78b6fcbe8a857be not found: ID does not exist" containerID="72d0aec70e92f2784e1c5d3b95cbd1f80306b398367150b8a78b6fcbe8a857be" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.829905 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72d0aec70e92f2784e1c5d3b95cbd1f80306b398367150b8a78b6fcbe8a857be"} err="failed to get container status \"72d0aec70e92f2784e1c5d3b95cbd1f80306b398367150b8a78b6fcbe8a857be\": rpc error: code = NotFound desc = could not find container 
\"72d0aec70e92f2784e1c5d3b95cbd1f80306b398367150b8a78b6fcbe8a857be\": container with ID starting with 72d0aec70e92f2784e1c5d3b95cbd1f80306b398367150b8a78b6fcbe8a857be not found: ID does not exist" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.829930 4768 scope.go:117] "RemoveContainer" containerID="79e810ba5ac54f56725cdc2354c425fc340f5accbb407f85b1156e27a4b166df" Nov 24 17:09:48 crc kubenswrapper[4768]: E1124 17:09:48.834818 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79e810ba5ac54f56725cdc2354c425fc340f5accbb407f85b1156e27a4b166df\": container with ID starting with 79e810ba5ac54f56725cdc2354c425fc340f5accbb407f85b1156e27a4b166df not found: ID does not exist" containerID="79e810ba5ac54f56725cdc2354c425fc340f5accbb407f85b1156e27a4b166df" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.834864 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79e810ba5ac54f56725cdc2354c425fc340f5accbb407f85b1156e27a4b166df"} err="failed to get container status \"79e810ba5ac54f56725cdc2354c425fc340f5accbb407f85b1156e27a4b166df\": rpc error: code = NotFound desc = could not find container \"79e810ba5ac54f56725cdc2354c425fc340f5accbb407f85b1156e27a4b166df\": container with ID starting with 79e810ba5ac54f56725cdc2354c425fc340f5accbb407f85b1156e27a4b166df not found: ID does not exist" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.892238 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.906304 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.915900 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 17:09:48 crc kubenswrapper[4768]: E1124 17:09:48.916279 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bdd840c-08db-42db-bd50-4f14b4dffbda" containerName="glance-httpd" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.916298 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bdd840c-08db-42db-bd50-4f14b4dffbda" containerName="glance-httpd" Nov 24 17:09:48 crc kubenswrapper[4768]: E1124 17:09:48.916315 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bdd840c-08db-42db-bd50-4f14b4dffbda" containerName="glance-log" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.916321 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bdd840c-08db-42db-bd50-4f14b4dffbda" containerName="glance-log" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.916505 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bdd840c-08db-42db-bd50-4f14b4dffbda" containerName="glance-httpd" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.916536 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bdd840c-08db-42db-bd50-4f14b4dffbda" containerName="glance-log" Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.918490 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.922910 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.924173 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Nov 24 17:09:48 crc kubenswrapper[4768]: I1124 17:09:48.928164 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.000992 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3df486b9-bc37-4240-9ed2-76dc84b54031-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.001035 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3df486b9-bc37-4240-9ed2-76dc84b54031-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.001065 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3df486b9-bc37-4240-9ed2-76dc84b54031-scripts\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.001166 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.008222 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3df486b9-bc37-4240-9ed2-76dc84b54031-logs\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.008461 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df486b9-bc37-4240-9ed2-76dc84b54031-config-data\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.008535 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df486b9-bc37-4240-9ed2-76dc84b54031-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.009715 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6tf2\" (UniqueName: \"kubernetes.io/projected/3df486b9-bc37-4240-9ed2-76dc84b54031-kube-api-access-m6tf2\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.111189 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3df486b9-bc37-4240-9ed2-76dc84b54031-logs\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.111250 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df486b9-bc37-4240-9ed2-76dc84b54031-config-data\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.111288 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df486b9-bc37-4240-9ed2-76dc84b54031-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.111329 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6tf2\" (UniqueName: \"kubernetes.io/projected/3df486b9-bc37-4240-9ed2-76dc84b54031-kube-api-access-m6tf2\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.111408 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3df486b9-bc37-4240-9ed2-76dc84b54031-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.111426 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3df486b9-bc37-4240-9ed2-76dc84b54031-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.111452 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3df486b9-bc37-4240-9ed2-76dc84b54031-scripts\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.111474 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.111713 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3df486b9-bc37-4240-9ed2-76dc84b54031-logs\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.111999 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.112287 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3df486b9-bc37-4240-9ed2-76dc84b54031-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.116905 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3df486b9-bc37-4240-9ed2-76dc84b54031-scripts\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.118424 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3df486b9-bc37-4240-9ed2-76dc84b54031-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.118471 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3df486b9-bc37-4240-9ed2-76dc84b54031-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.121116 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3df486b9-bc37-4240-9ed2-76dc84b54031-config-data\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.132337 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6tf2\" (UniqueName: \"kubernetes.io/projected/3df486b9-bc37-4240-9ed2-76dc84b54031-kube-api-access-m6tf2\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.146226 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"3df486b9-bc37-4240-9ed2-76dc84b54031\") " pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.238786 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.590849 4768 generic.go:334] "Generic (PLEG): container finished" podID="9e27086a-b3f1-4532-b288-9bac47e38944" containerID="f72247f294691b11b929b4e58714109eed076cc3172a18400938a2a87dca8c9a" exitCode=2
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.591095 4768 generic.go:334] "Generic (PLEG): container finished" podID="9e27086a-b3f1-4532-b288-9bac47e38944" containerID="696b9c29ccefbb4166f903c633d8ccab74be755e3b04fe83a9a75e3bf13ae8b5" exitCode=0
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.591707 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bdd840c-08db-42db-bd50-4f14b4dffbda" path="/var/lib/kubelet/pods/3bdd840c-08db-42db-bd50-4f14b4dffbda/volumes"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.592334 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e27086a-b3f1-4532-b288-9bac47e38944","Type":"ContainerDied","Data":"f72247f294691b11b929b4e58714109eed076cc3172a18400938a2a87dca8c9a"}
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.592378 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e27086a-b3f1-4532-b288-9bac47e38944","Type":"ContainerDied","Data":"696b9c29ccefbb4166f903c633d8ccab74be755e3b04fe83a9a75e3bf13ae8b5"}
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.600570 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Nov 24 17:09:49 crc kubenswrapper[4768]: I1124 17:09:49.834938 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 24 17:09:49 crc kubenswrapper[4768]: W1124 17:09:49.845435 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3df486b9_bc37_4240_9ed2_76dc84b54031.slice/crio-ca01ff3521ab221114208a41fa27ac347b7e4eae9866fe759dfb8fb40de7e6db WatchSource:0}: Error finding container ca01ff3521ab221114208a41fa27ac347b7e4eae9866fe759dfb8fb40de7e6db: Status 404 returned error can't find the container with id ca01ff3521ab221114208a41fa27ac347b7e4eae9866fe759dfb8fb40de7e6db
Nov 24 17:09:50 crc kubenswrapper[4768]: I1124 17:09:50.633823 4768 generic.go:334] "Generic (PLEG): container finished" podID="21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7" containerID="781b648f95fba0a21385dde8435f3d2e3be4edb0dfb2d49ae1b282b12b20427b" exitCode=0
Nov 24 17:09:50 crc kubenswrapper[4768]: I1124 17:09:50.633887 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7","Type":"ContainerDied","Data":"781b648f95fba0a21385dde8435f3d2e3be4edb0dfb2d49ae1b282b12b20427b"}
Nov 24 17:09:50 crc kubenswrapper[4768]: I1124 17:09:50.637361 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3df486b9-bc37-4240-9ed2-76dc84b54031","Type":"ContainerStarted","Data":"52d963f14e9887d4f8b335cd9e7df0030aaba14734678bc56fb515da69dd44e1"}
Nov 24 17:09:50 crc kubenswrapper[4768]: I1124 17:09:50.637394 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3df486b9-bc37-4240-9ed2-76dc84b54031","Type":"ContainerStarted","Data":"ca01ff3521ab221114208a41fa27ac347b7e4eae9866fe759dfb8fb40de7e6db"}
Nov 24 17:09:50 crc kubenswrapper[4768]: I1124 17:09:50.798610 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:50 crc kubenswrapper[4768]: I1124 17:09:50.960774 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-combined-ca-bundle\") pod \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") "
Nov 24 17:09:50 crc kubenswrapper[4768]: I1124 17:09:50.960848 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-config-data\") pod \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") "
Nov 24 17:09:50 crc kubenswrapper[4768]: I1124 17:09:50.960887 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-logs\") pod \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") "
Nov 24 17:09:50 crc kubenswrapper[4768]: I1124 17:09:50.961008 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x49p\" (UniqueName: \"kubernetes.io/projected/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-kube-api-access-9x49p\") pod \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") "
Nov 24 17:09:50 crc kubenswrapper[4768]: I1124 17:09:50.961044 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") "
Nov 24 17:09:50 crc kubenswrapper[4768]: I1124 17:09:50.961066 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-httpd-run\") pod \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") "
Nov 24 17:09:50 crc kubenswrapper[4768]: I1124 17:09:50.961147 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-internal-tls-certs\") pod \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") "
Nov 24 17:09:50 crc kubenswrapper[4768]: I1124 17:09:50.961178 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-scripts\") pod \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\" (UID: \"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7\") "
Nov 24 17:09:50 crc kubenswrapper[4768]: I1124 17:09:50.961602 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-logs" (OuterVolumeSpecName: "logs") pod "21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7" (UID: "21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 17:09:50 crc kubenswrapper[4768]: I1124 17:09:50.961804 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7" (UID: "21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 17:09:50 crc kubenswrapper[4768]: I1124 17:09:50.966697 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7" (UID: "21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 24 17:09:50 crc kubenswrapper[4768]: I1124 17:09:50.967973 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-scripts" (OuterVolumeSpecName: "scripts") pod "21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7" (UID: "21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:09:50 crc kubenswrapper[4768]: I1124 17:09:50.973927 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-kube-api-access-9x49p" (OuterVolumeSpecName: "kube-api-access-9x49p") pod "21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7" (UID: "21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7"). InnerVolumeSpecName "kube-api-access-9x49p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.007962 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7" (UID: "21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.051113 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7" (UID: "21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.063256 4768 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.063292 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.063302 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.063356 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-logs\") on node \"crc\" DevicePath \"\""
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.063369 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x49p\" (UniqueName: \"kubernetes.io/projected/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-kube-api-access-9x49p\") on node \"crc\" DevicePath \"\""
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.063400 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.063412 4768 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.076417 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-config-data" (OuterVolumeSpecName: "config-data") pod "21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7" (UID: "21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.087276 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.165335 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.165392 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.650923 4768 generic.go:334] "Generic (PLEG): container finished" podID="4150a56f-5273-4601-8abd-53554fee9e46" containerID="6094cd6b731f3ff0a4db0c658db6cbdd83434479a24625b3f06afea64fc48d84" exitCode=0
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.650947 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"4150a56f-5273-4601-8abd-53554fee9e46","Type":"ContainerDied","Data":"6094cd6b731f3ff0a4db0c658db6cbdd83434479a24625b3f06afea64fc48d84"}
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.651323 4768 scope.go:117] "RemoveContainer" containerID="2e2f17449f7039559e23055821c8bfca243b081737f1b884c1829a30e990411d"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.652268 4768 scope.go:117] "RemoveContainer" containerID="6094cd6b731f3ff0a4db0c658db6cbdd83434479a24625b3f06afea64fc48d84"
Nov 24 17:09:51 crc kubenswrapper[4768]: E1124 17:09:51.652614 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-inspector\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-inspector pod=ironic-inspector-0_openstack(4150a56f-5273-4601-8abd-53554fee9e46)\"" pod="openstack/ironic-inspector-0" podUID="4150a56f-5273-4601-8abd-53554fee9e46"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.653396 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7","Type":"ContainerDied","Data":"65f0f8155e77589ab96edf50b44dd24bb2a5e1390d2dfdc7fc943def4e64f7c7"}
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.653428 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.723092 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.741486 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.756133 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 17:09:51 crc kubenswrapper[4768]: E1124 17:09:51.756562 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7" containerName="glance-log"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.756582 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7" containerName="glance-log"
Nov 24 17:09:51 crc kubenswrapper[4768]: E1124 17:09:51.756626 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7" containerName="glance-httpd"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.756636 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7" containerName="glance-httpd"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.756931 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7" containerName="glance-log"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.756953 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7" containerName="glance-httpd"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.758151 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.767228 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.767369 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.770841 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.829667 4768 scope.go:117] "RemoveContainer" containerID="781b648f95fba0a21385dde8435f3d2e3be4edb0dfb2d49ae1b282b12b20427b"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.853305 4768 scope.go:117] "RemoveContainer" containerID="2ad5ec824ee5334baae153e8bf0deda8a0353c87f9ab8af6aa26ef61a6df8bb6"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.876722 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb8b800-a966-48fe-8075-4709302ee14d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.876835 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eb8b800-a966-48fe-8075-4709302ee14d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.876896 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eb8b800-a966-48fe-8075-4709302ee14d-logs\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.877113 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb8b800-a966-48fe-8075-4709302ee14d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.877178 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.877254 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xr8n\" (UniqueName: \"kubernetes.io/projected/6eb8b800-a966-48fe-8075-4709302ee14d-kube-api-access-5xr8n\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.877304 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6eb8b800-a966-48fe-8075-4709302ee14d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.877378 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb8b800-a966-48fe-8075-4709302ee14d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.978630 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb8b800-a966-48fe-8075-4709302ee14d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.978681 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.978719 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xr8n\" (UniqueName: \"kubernetes.io/projected/6eb8b800-a966-48fe-8075-4709302ee14d-kube-api-access-5xr8n\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.978750 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6eb8b800-a966-48fe-8075-4709302ee14d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.978777 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb8b800-a966-48fe-8075-4709302ee14d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.978830 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb8b800-a966-48fe-8075-4709302ee14d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.978861 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eb8b800-a966-48fe-8075-4709302ee14d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.978907 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eb8b800-a966-48fe-8075-4709302ee14d-logs\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.978925 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.979298 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6eb8b800-a966-48fe-8075-4709302ee14d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:51 crc kubenswrapper[4768]: I1124 17:09:51.979898 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eb8b800-a966-48fe-8075-4709302ee14d-logs\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:52 crc kubenswrapper[4768]: I1124 17:09:52.003848 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb8b800-a966-48fe-8075-4709302ee14d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:52 crc kubenswrapper[4768]: I1124 17:09:52.004001 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eb8b800-a966-48fe-8075-4709302ee14d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:52 crc kubenswrapper[4768]: I1124 17:09:52.004084 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb8b800-a966-48fe-8075-4709302ee14d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:52 crc kubenswrapper[4768]: I1124 17:09:52.010694 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xr8n\" (UniqueName: \"kubernetes.io/projected/6eb8b800-a966-48fe-8075-4709302ee14d-kube-api-access-5xr8n\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:52 crc kubenswrapper[4768]: I1124 17:09:52.029160 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb8b800-a966-48fe-8075-4709302ee14d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:52 crc kubenswrapper[4768]: I1124 17:09:52.040408 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"6eb8b800-a966-48fe-8075-4709302ee14d\") " pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:52 crc kubenswrapper[4768]: I1124 17:09:52.076860 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 24 17:09:52 crc kubenswrapper[4768]: I1124 17:09:52.664609 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3df486b9-bc37-4240-9ed2-76dc84b54031","Type":"ContainerStarted","Data":"877b3807b4e5ee74e0fe0c9d52cb2366121031ae0a84d29ffea21ebf53d39e89"}
Nov 24 17:09:52 crc kubenswrapper[4768]: I1124 17:09:52.694630 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.694600211 podStartE2EDuration="4.694600211s" podCreationTimestamp="2025-11-24 17:09:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:09:52.690936287 +0000 UTC m=+1073.937904965" watchObservedRunningTime="2025-11-24 17:09:52.694600211 +0000 UTC m=+1073.941568869"
Nov 24 17:09:52 crc kubenswrapper[4768]: I1124 17:09:52.732843 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Nov 24 17:09:52 crc kubenswrapper[4768]: I1124 17:09:52.733866 4768 scope.go:117] "RemoveContainer" containerID="6094cd6b731f3ff0a4db0c658db6cbdd83434479a24625b3f06afea64fc48d84"
Nov 24 17:09:52 crc kubenswrapper[4768]: I1124 17:09:52.734066 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0"
Nov 24 17:09:52 crc kubenswrapper[4768]: E1124 17:09:52.734104 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-inspector\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-inspector pod=ironic-inspector-0_openstack(4150a56f-5273-4601-8abd-53554fee9e46)\"" pod="openstack/ironic-inspector-0" podUID="4150a56f-5273-4601-8abd-53554fee9e46"
Nov 24 17:09:52 crc kubenswrapper[4768]: I1124 17:09:52.734133 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0"
Nov 24 17:09:52 crc kubenswrapper[4768]: I1124 17:09:52.734149 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0"
Nov 24 17:09:52 crc kubenswrapper[4768]: I1124 17:09:52.734158 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-inspector-0"
Nov 24 17:09:52 crc kubenswrapper[4768]: I1124 17:09:52.744319 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 17:09:52 crc kubenswrapper[4768]: I1124 17:09:52.749467 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/ironic-inspector-0" podUID="4150a56f-5273-4601-8abd-53554fee9e46" containerName="ironic-inspector-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Nov 24 17:09:53 crc kubenswrapper[4768]: I1124 17:09:53.597422 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7" path="/var/lib/kubelet/pods/21ec6fe8-8b5a-4ebd-89a9-459fd8f109d7/volumes"
Nov 24 17:09:53 crc kubenswrapper[4768]: I1124 17:09:53.695945 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6eb8b800-a966-48fe-8075-4709302ee14d","Type":"ContainerStarted","Data":"1569fa27c03cc3ccc2065b975b1112280950601c3790ce14899ed23207ff660a"}
Nov 24 17:09:53 crc kubenswrapper[4768]: I1124 17:09:53.697388 4768 scope.go:117] "RemoveContainer" containerID="6094cd6b731f3ff0a4db0c658db6cbdd83434479a24625b3f06afea64fc48d84"
Nov 24 17:09:53 crc kubenswrapper[4768]: E1124 17:09:53.697789 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-inspector\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-inspector pod=ironic-inspector-0_openstack(4150a56f-5273-4601-8abd-53554fee9e46)\"" pod="openstack/ironic-inspector-0" podUID="4150a56f-5273-4601-8abd-53554fee9e46"
Nov 24 17:09:54 crc kubenswrapper[4768]: I1124 17:09:54.713813 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6eb8b800-a966-48fe-8075-4709302ee14d","Type":"ContainerStarted","Data":"e46fe2e2a118c62f533744f5bd619763bd8056390586d3f1267a1fa5e6da44f9"}
Nov 24 17:09:54 crc kubenswrapper[4768]: I1124 17:09:54.714249 4768 scope.go:117] "RemoveContainer" containerID="6094cd6b731f3ff0a4db0c658db6cbdd83434479a24625b3f06afea64fc48d84"
Nov 24 17:09:54 crc kubenswrapper[4768]: E1124 17:09:54.714573 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-inspector\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-inspector pod=ironic-inspector-0_openstack(4150a56f-5273-4601-8abd-53554fee9e46)\"" pod="openstack/ironic-inspector-0" podUID="4150a56f-5273-4601-8abd-53554fee9e46"
Nov 24 17:09:55 crc kubenswrapper[4768]: I1124 17:09:55.726203 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6eb8b800-a966-48fe-8075-4709302ee14d","Type":"ContainerStarted","Data":"bfd22dffae43a5b7b4da7148115141965a6b28dee3451a5a70254e99579eb611"}
Nov 24 17:09:55 crc kubenswrapper[4768]: I1124 17:09:55.747055 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.747033561 podStartE2EDuration="4.747033561s" podCreationTimestamp="2025-11-24 17:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:09:55.745169078 +0000 UTC m=+1076.992137726" watchObservedRunningTime="2025-11-24 17:09:55.747033561 +0000 UTC m=+1076.994002229"
Nov 24 17:09:56 crc kubenswrapper[4768]: I1124 17:09:56.735569 4768 generic.go:334] "Generic (PLEG): container finished" podID="9e27086a-b3f1-4532-b288-9bac47e38944" containerID="32565bc959dd0d8638b8a9c16beb66fe80e8480553a2324b0a546a9067af26cc" exitCode=0
Nov 24 17:09:56 crc kubenswrapper[4768]: I1124 17:09:56.735631 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e27086a-b3f1-4532-b288-9bac47e38944","Type":"ContainerDied","Data":"32565bc959dd0d8638b8a9c16beb66fe80e8480553a2324b0a546a9067af26cc"}
Nov 24 17:09:59 crc kubenswrapper[4768]: I1124 17:09:59.240173 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 24 17:09:59 crc kubenswrapper[4768]: I1124 17:09:59.240636 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 24 17:09:59 crc kubenswrapper[4768]: I1124 17:09:59.271388 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 24 17:09:59 crc kubenswrapper[4768]: I1124 17:09:59.290811 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 24 17:09:59 crc kubenswrapper[4768]: I1124 17:09:59.764614 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 24 17:09:59 crc kubenswrapper[4768]: I1124 17:09:59.764860 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 24 17:10:01 crc kubenswrapper[4768]: I1124 17:10:01.610667 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 24 17:10:01 crc kubenswrapper[4768]: I1124 17:10:01.713855 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 24 17:10:02 crc kubenswrapper[4768]: I1124 17:10:02.077784 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 24 17:10:02 crc kubenswrapper[4768]: I1124 17:10:02.077851 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 24 17:10:02 crc kubenswrapper[4768]: I1124 17:10:02.112837 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 24 17:10:02 crc kubenswrapper[4768]: I1124 17:10:02.145566 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 24 17:10:02 crc kubenswrapper[4768]: I1124 17:10:02.790501 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/ironic-inspector-0" podUID="4150a56f-5273-4601-8abd-53554fee9e46" containerName="ironic-inspector-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Nov 24 17:10:02 crc kubenswrapper[4768]: I1124 17:10:02.810341 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 24 17:10:02 crc kubenswrapper[4768]: I1124 17:10:02.810418 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 24 17:10:04 crc kubenswrapper[4768]: I1124 17:10:04.777966 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 24 17:10:04 crc kubenswrapper[4768]: I1124 17:10:04.820671 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 24 17:10:04 crc kubenswrapper[4768]: I1124 17:10:04.829232 4768 generic.go:334] "Generic (PLEG): container finished" podID="63ae2678-f257-4fe9-b15c-72c7171320ad" containerID="9b39fb665310ba9e8722b968f15960332d4a4f4db8ead5a1de7d392565bda217" exitCode=0
Nov 24 17:10:04 crc kubenswrapper[4768]: I1124 17:10:04.829391 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6mx2x" event={"ID":"63ae2678-f257-4fe9-b15c-72c7171320ad","Type":"ContainerDied","Data":"9b39fb665310ba9e8722b968f15960332d4a4f4db8ead5a1de7d392565bda217"}
Nov 24 17:10:06 crc kubenswrapper[4768]: I1124 17:10:06.244971 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6mx2x"
Nov 24 17:10:06 crc kubenswrapper[4768]: I1124 17:10:06.362693 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n55pb\" (UniqueName: \"kubernetes.io/projected/63ae2678-f257-4fe9-b15c-72c7171320ad-kube-api-access-n55pb\") pod \"63ae2678-f257-4fe9-b15c-72c7171320ad\" (UID: \"63ae2678-f257-4fe9-b15c-72c7171320ad\") "
Nov 24 17:10:06 crc kubenswrapper[4768]: I1124 17:10:06.362875 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ae2678-f257-4fe9-b15c-72c7171320ad-scripts\") pod \"63ae2678-f257-4fe9-b15c-72c7171320ad\" (UID: \"63ae2678-f257-4fe9-b15c-72c7171320ad\") "
Nov 24 17:10:06 crc kubenswrapper[4768]: I1124 17:10:06.362917 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ae2678-f257-4fe9-b15c-72c7171320ad-combined-ca-bundle\") pod \"63ae2678-f257-4fe9-b15c-72c7171320ad\" (UID: \"63ae2678-f257-4fe9-b15c-72c7171320ad\") "
Nov 24 17:10:06 crc kubenswrapper[4768]: I1124 17:10:06.362945 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ae2678-f257-4fe9-b15c-72c7171320ad-config-data\") pod \"63ae2678-f257-4fe9-b15c-72c7171320ad\" (UID: \"63ae2678-f257-4fe9-b15c-72c7171320ad\") "
Nov 24 17:10:06 crc kubenswrapper[4768]: I1124 17:10:06.369202 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63ae2678-f257-4fe9-b15c-72c7171320ad-kube-api-access-n55pb" (OuterVolumeSpecName: "kube-api-access-n55pb") pod "63ae2678-f257-4fe9-b15c-72c7171320ad" (UID: "63ae2678-f257-4fe9-b15c-72c7171320ad"). InnerVolumeSpecName "kube-api-access-n55pb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 17:10:06 crc kubenswrapper[4768]: I1124 17:10:06.371165 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ae2678-f257-4fe9-b15c-72c7171320ad-scripts" (OuterVolumeSpecName: "scripts") pod "63ae2678-f257-4fe9-b15c-72c7171320ad" (UID: "63ae2678-f257-4fe9-b15c-72c7171320ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:10:06 crc kubenswrapper[4768]: I1124 17:10:06.402492 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ae2678-f257-4fe9-b15c-72c7171320ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63ae2678-f257-4fe9-b15c-72c7171320ad" (UID: "63ae2678-f257-4fe9-b15c-72c7171320ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:10:06 crc kubenswrapper[4768]: I1124 17:10:06.403234 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ae2678-f257-4fe9-b15c-72c7171320ad-config-data" (OuterVolumeSpecName: "config-data") pod "63ae2678-f257-4fe9-b15c-72c7171320ad" (UID: "63ae2678-f257-4fe9-b15c-72c7171320ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 17:10:06 crc kubenswrapper[4768]: I1124 17:10:06.465318 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ae2678-f257-4fe9-b15c-72c7171320ad-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 17:10:06 crc kubenswrapper[4768]: I1124 17:10:06.465487 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ae2678-f257-4fe9-b15c-72c7171320ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 17:10:06 crc kubenswrapper[4768]: I1124 17:10:06.465576 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ae2678-f257-4fe9-b15c-72c7171320ad-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 17:10:06 crc kubenswrapper[4768]: I1124 17:10:06.465639 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n55pb\" (UniqueName: \"kubernetes.io/projected/63ae2678-f257-4fe9-b15c-72c7171320ad-kube-api-access-n55pb\") on node \"crc\" DevicePath \"\""
Nov 24 17:10:06 crc kubenswrapper[4768]: I1124 17:10:06.853443 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6mx2x" event={"ID":"63ae2678-f257-4fe9-b15c-72c7171320ad","Type":"ContainerDied","Data":"a7c5f200a31c26673767845e6674a4eda98c41db6ed9cdc8551d34b5f0addfc7"}
Nov 24 17:10:06 crc kubenswrapper[4768]: I1124 17:10:06.853846 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7c5f200a31c26673767845e6674a4eda98c41db6ed9cdc8551d34b5f0addfc7"
Nov 24 17:10:06 crc kubenswrapper[4768]: I1124 17:10:06.853483 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6mx2x"
Nov 24 17:10:07 crc kubenswrapper[4768]: I1124 17:10:07.013391 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Nov 24 17:10:07 crc kubenswrapper[4768]: E1124 17:10:07.013827 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ae2678-f257-4fe9-b15c-72c7171320ad" containerName="nova-cell0-conductor-db-sync"
Nov 24 17:10:07 crc kubenswrapper[4768]: I1124 17:10:07.013846 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ae2678-f257-4fe9-b15c-72c7171320ad" containerName="nova-cell0-conductor-db-sync"
Nov 24 17:10:07 crc kubenswrapper[4768]: I1124 17:10:07.014013 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="63ae2678-f257-4fe9-b15c-72c7171320ad" containerName="nova-cell0-conductor-db-sync"
Nov 24 17:10:07 crc kubenswrapper[4768]: I1124 17:10:07.014653 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Nov 24 17:10:07 crc kubenswrapper[4768]: I1124 17:10:07.016159 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7k9dw"
Nov 24 17:10:07 crc kubenswrapper[4768]: I1124 17:10:07.018982 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Nov 24 17:10:07 crc kubenswrapper[4768]: I1124 17:10:07.034197 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Nov 24 17:10:07 crc kubenswrapper[4768]: I1124 17:10:07.081160 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n9qc\" (UniqueName: \"kubernetes.io/projected/aaf1fd30-6ac7-4418-93f7-cf24adacd921-kube-api-access-8n9qc\") pod \"nova-cell0-conductor-0\" (UID: \"aaf1fd30-6ac7-4418-93f7-cf24adacd921\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 17:10:07 crc kubenswrapper[4768]: I1124 17:10:07.081228 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaf1fd30-6ac7-4418-93f7-cf24adacd921-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"aaf1fd30-6ac7-4418-93f7-cf24adacd921\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 17:10:07 crc kubenswrapper[4768]: I1124 17:10:07.091629 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf1fd30-6ac7-4418-93f7-cf24adacd921-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"aaf1fd30-6ac7-4418-93f7-cf24adacd921\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 17:10:07 crc kubenswrapper[4768]: I1124 17:10:07.194058 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n9qc\" (UniqueName: \"kubernetes.io/projected/aaf1fd30-6ac7-4418-93f7-cf24adacd921-kube-api-access-8n9qc\") pod \"nova-cell0-conductor-0\" (UID: \"aaf1fd30-6ac7-4418-93f7-cf24adacd921\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 17:10:07 crc kubenswrapper[4768]: I1124 17:10:07.194259 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaf1fd30-6ac7-4418-93f7-cf24adacd921-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"aaf1fd30-6ac7-4418-93f7-cf24adacd921\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 17:10:07 crc kubenswrapper[4768]: I1124 17:10:07.194409 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf1fd30-6ac7-4418-93f7-cf24adacd921-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"aaf1fd30-6ac7-4418-93f7-cf24adacd921\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 17:10:07 crc kubenswrapper[4768]: I1124 17:10:07.201588 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaf1fd30-6ac7-4418-93f7-cf24adacd921-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"aaf1fd30-6ac7-4418-93f7-cf24adacd921\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 17:10:07 crc kubenswrapper[4768]: I1124 17:10:07.202382 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaf1fd30-6ac7-4418-93f7-cf24adacd921-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"aaf1fd30-6ac7-4418-93f7-cf24adacd921\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 17:10:07 crc kubenswrapper[4768]: I1124 17:10:07.208797 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n9qc\" (UniqueName: \"kubernetes.io/projected/aaf1fd30-6ac7-4418-93f7-cf24adacd921-kube-api-access-8n9qc\") pod \"nova-cell0-conductor-0\" (UID: \"aaf1fd30-6ac7-4418-93f7-cf24adacd921\") " pod="openstack/nova-cell0-conductor-0"
Nov 24 17:10:07 crc kubenswrapper[4768]: I1124 17:10:07.380724 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Nov 24 17:10:07 crc kubenswrapper[4768]: I1124 17:10:07.888596 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Nov 24 17:10:08 crc kubenswrapper[4768]: I1124 17:10:08.581257 4768 scope.go:117] "RemoveContainer" containerID="6094cd6b731f3ff0a4db0c658db6cbdd83434479a24625b3f06afea64fc48d84"
Nov 24 17:10:08 crc kubenswrapper[4768]: I1124 17:10:08.876046 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"aaf1fd30-6ac7-4418-93f7-cf24adacd921","Type":"ContainerStarted","Data":"47cc32b4e47fe6e1721b42b85217506163383eaeae3ff7be2990361cbe5c3c2f"}
Nov 24 17:10:08 crc kubenswrapper[4768]: I1124 17:10:08.876090 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"aaf1fd30-6ac7-4418-93f7-cf24adacd921","Type":"ContainerStarted","Data":"64c04c433926b97e3162633007459957c7e2530d18a79d1b86f8e00e1e574b20"}
Nov 24 17:10:08 crc kubenswrapper[4768]: I1124 17:10:08.876276 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Nov 24 17:10:09 crc kubenswrapper[4768]: I1124 17:10:09.892841 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"4150a56f-5273-4601-8abd-53554fee9e46","Type":"ContainerStarted","Data":"6fab527565e76d6a1ddaa205a6950d5299bf4dec41b527428364d36d817fc200"}
Nov 24 17:10:09 crc kubenswrapper[4768]: I1124 17:10:09.928835 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=3.928817007 podStartE2EDuration="3.928817007s" podCreationTimestamp="2025-11-24 17:10:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:10:08.89944167 +0000 UTC m=+1090.146410338" watchObservedRunningTime="2025-11-24 17:10:09.928817007 +0000 UTC m=+1091.175785665"
Nov 24 17:10:12 crc kubenswrapper[4768]: I1124 17:10:12.732841 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Nov 24 17:10:12 crc kubenswrapper[4768]: I1124 17:10:12.733323 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0"
Nov 24 17:10:12 crc kubenswrapper[4768]: I1124 17:10:12.767939 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0"
Nov 24 17:10:12 crc kubenswrapper[4768]: I1124 17:10:12.772848 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0"
Nov 24 17:10:12 crc kubenswrapper[4768]: I1124 17:10:12.857680 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9e27086a-b3f1-4532-b288-9bac47e38944" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Nov 24 17:10:12 crc kubenswrapper[4768]: I1124 17:10:12.927408 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Nov 24 17:10:12 crc kubenswrapper[4768]: I1124 17:10:12.932867 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Nov 24 17:10:17 crc kubenswrapper[4768]: I1124 17:10:17.417640 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Nov 24 17:10:17 crc kubenswrapper[4768]: I1124 17:10:17.942941 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-qdb22"]
Nov 24 17:10:17 crc kubenswrapper[4768]: I1124 17:10:17.944731 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qdb22"
Nov 24 17:10:17 crc kubenswrapper[4768]: I1124 17:10:17.946435 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Nov 24 17:10:17 crc kubenswrapper[4768]: I1124 17:10:17.947808 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Nov 24 17:10:17 crc kubenswrapper[4768]: I1124 17:10:17.967979 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qdb22"]
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.028663 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-config-data\") pod \"nova-cell0-cell-mapping-qdb22\" (UID: \"8f88274f-db1f-4ab0-88bf-12a230c0c5e6\") " pod="openstack/nova-cell0-cell-mapping-qdb22"
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.028716 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qdb22\" (UID: \"8f88274f-db1f-4ab0-88bf-12a230c0c5e6\") " pod="openstack/nova-cell0-cell-mapping-qdb22"
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.028852 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-scripts\") pod \"nova-cell0-cell-mapping-qdb22\" (UID: \"8f88274f-db1f-4ab0-88bf-12a230c0c5e6\") " pod="openstack/nova-cell0-cell-mapping-qdb22"
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.028916 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrz6g\" (UniqueName: \"kubernetes.io/projected/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-kube-api-access-wrz6g\") pod \"nova-cell0-cell-mapping-qdb22\" (UID: \"8f88274f-db1f-4ab0-88bf-12a230c0c5e6\") " pod="openstack/nova-cell0-cell-mapping-qdb22"
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.106185 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.107560 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.108946 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.126899 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.130666 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrz6g\" (UniqueName: \"kubernetes.io/projected/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-kube-api-access-wrz6g\") pod \"nova-cell0-cell-mapping-qdb22\" (UID: \"8f88274f-db1f-4ab0-88bf-12a230c0c5e6\") " pod="openstack/nova-cell0-cell-mapping-qdb22"
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.130818 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-config-data\") pod \"nova-cell0-cell-mapping-qdb22\" (UID: \"8f88274f-db1f-4ab0-88bf-12a230c0c5e6\") " pod="openstack/nova-cell0-cell-mapping-qdb22"
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.130844 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qdb22\" (UID: \"8f88274f-db1f-4ab0-88bf-12a230c0c5e6\") " pod="openstack/nova-cell0-cell-mapping-qdb22"
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.130910 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-scripts\") pod \"nova-cell0-cell-mapping-qdb22\" (UID: \"8f88274f-db1f-4ab0-88bf-12a230c0c5e6\") " pod="openstack/nova-cell0-cell-mapping-qdb22"
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.137276 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-config-data\") pod \"nova-cell0-cell-mapping-qdb22\" (UID: \"8f88274f-db1f-4ab0-88bf-12a230c0c5e6\") " pod="openstack/nova-cell0-cell-mapping-qdb22"
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.138243 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qdb22\" (UID: \"8f88274f-db1f-4ab0-88bf-12a230c0c5e6\") " pod="openstack/nova-cell0-cell-mapping-qdb22"
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.168901 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-scripts\") pod \"nova-cell0-cell-mapping-qdb22\" (UID: \"8f88274f-db1f-4ab0-88bf-12a230c0c5e6\") " pod="openstack/nova-cell0-cell-mapping-qdb22"
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.169396 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrz6g\" (UniqueName: \"kubernetes.io/projected/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-kube-api-access-wrz6g\") pod \"nova-cell0-cell-mapping-qdb22\" (UID: \"8f88274f-db1f-4ab0-88bf-12a230c0c5e6\") " pod="openstack/nova-cell0-cell-mapping-qdb22"
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.198173 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.199472 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.205841 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.208943 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.235094 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/319b2ddf-3b71-41c9-8fd8-7830b21ba3ec-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"319b2ddf-3b71-41c9-8fd8-7830b21ba3ec\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.235168 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319b2ddf-3b71-41c9-8fd8-7830b21ba3ec-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"319b2ddf-3b71-41c9-8fd8-7830b21ba3ec\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.235246 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fn4k\" (UniqueName: \"kubernetes.io/projected/319b2ddf-3b71-41c9-8fd8-7830b21ba3ec-kube-api-access-5fn4k\") pod \"nova-cell1-novncproxy-0\" (UID: \"319b2ddf-3b71-41c9-8fd8-7830b21ba3ec\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.267865 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qdb22"
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.271199 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.300411 4768 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.307049 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.325730 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.341411 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d523be-42d4-491c-9e07-3b76db03250c-config-data\") pod \"nova-scheduler-0\" (UID: \"f0d523be-42d4-491c-9e07-3b76db03250c\") " pod="openstack/nova-scheduler-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.341463 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fn4k\" (UniqueName: \"kubernetes.io/projected/319b2ddf-3b71-41c9-8fd8-7830b21ba3ec-kube-api-access-5fn4k\") pod \"nova-cell1-novncproxy-0\" (UID: \"319b2ddf-3b71-41c9-8fd8-7830b21ba3ec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.341549 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d523be-42d4-491c-9e07-3b76db03250c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f0d523be-42d4-491c-9e07-3b76db03250c\") " pod="openstack/nova-scheduler-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.341583 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/319b2ddf-3b71-41c9-8fd8-7830b21ba3ec-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"319b2ddf-3b71-41c9-8fd8-7830b21ba3ec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.341625 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319b2ddf-3b71-41c9-8fd8-7830b21ba3ec-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"319b2ddf-3b71-41c9-8fd8-7830b21ba3ec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.341647 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f44hl\" (UniqueName: \"kubernetes.io/projected/f0d523be-42d4-491c-9e07-3b76db03250c-kube-api-access-f44hl\") pod \"nova-scheduler-0\" (UID: \"f0d523be-42d4-491c-9e07-3b76db03250c\") " pod="openstack/nova-scheduler-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.355732 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/319b2ddf-3b71-41c9-8fd8-7830b21ba3ec-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"319b2ddf-3b71-41c9-8fd8-7830b21ba3ec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.363973 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319b2ddf-3b71-41c9-8fd8-7830b21ba3ec-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"319b2ddf-3b71-41c9-8fd8-7830b21ba3ec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.378081 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5fn4k\" (UniqueName: \"kubernetes.io/projected/319b2ddf-3b71-41c9-8fd8-7830b21ba3ec-kube-api-access-5fn4k\") pod \"nova-cell1-novncproxy-0\" (UID: \"319b2ddf-3b71-41c9-8fd8-7830b21ba3ec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.378901 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.389507 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.403489 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.428121 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.428722 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.443245 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f44hl\" (UniqueName: \"kubernetes.io/projected/f0d523be-42d4-491c-9e07-3b76db03250c-kube-api-access-f44hl\") pod \"nova-scheduler-0\" (UID: \"f0d523be-42d4-491c-9e07-3b76db03250c\") " pod="openstack/nova-scheduler-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.443299 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70371e42-7f78-41cf-a2f7-c1322e103ca3-logs\") pod \"nova-api-0\" (UID: \"70371e42-7f78-41cf-a2f7-c1322e103ca3\") " pod="openstack/nova-api-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.443333 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70371e42-7f78-41cf-a2f7-c1322e103ca3-config-data\") pod \"nova-api-0\" (UID: \"70371e42-7f78-41cf-a2f7-c1322e103ca3\") " pod="openstack/nova-api-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.443581 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbdg7\" (UniqueName: \"kubernetes.io/projected/70371e42-7f78-41cf-a2f7-c1322e103ca3-kube-api-access-wbdg7\") pod \"nova-api-0\" (UID: \"70371e42-7f78-41cf-a2f7-c1322e103ca3\") " pod="openstack/nova-api-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.443611 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d523be-42d4-491c-9e07-3b76db03250c-config-data\") pod \"nova-scheduler-0\" (UID: \"f0d523be-42d4-491c-9e07-3b76db03250c\") " pod="openstack/nova-scheduler-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.443660 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70371e42-7f78-41cf-a2f7-c1322e103ca3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"70371e42-7f78-41cf-a2f7-c1322e103ca3\") " pod="openstack/nova-api-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.443705 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d523be-42d4-491c-9e07-3b76db03250c-combined-ca-bundle\") pod 
\"nova-scheduler-0\" (UID: \"f0d523be-42d4-491c-9e07-3b76db03250c\") " pod="openstack/nova-scheduler-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.458881 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d523be-42d4-491c-9e07-3b76db03250c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f0d523be-42d4-491c-9e07-3b76db03250c\") " pod="openstack/nova-scheduler-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.461975 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d523be-42d4-491c-9e07-3b76db03250c-config-data\") pod \"nova-scheduler-0\" (UID: \"f0d523be-42d4-491c-9e07-3b76db03250c\") " pod="openstack/nova-scheduler-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.465893 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f44hl\" (UniqueName: \"kubernetes.io/projected/f0d523be-42d4-491c-9e07-3b76db03250c-kube-api-access-f44hl\") pod \"nova-scheduler-0\" (UID: \"f0d523be-42d4-491c-9e07-3b76db03250c\") " pod="openstack/nova-scheduler-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.524489 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-w52kt"] Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.526584 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.534474 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-w52kt"] Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.546199 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f00f2e97-93de-4e24-9495-e693bc6dee0a-logs\") pod \"nova-metadata-0\" (UID: \"f00f2e97-93de-4e24-9495-e693bc6dee0a\") " pod="openstack/nova-metadata-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.546239 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2g7m\" (UniqueName: \"kubernetes.io/projected/f00f2e97-93de-4e24-9495-e693bc6dee0a-kube-api-access-b2g7m\") pod \"nova-metadata-0\" (UID: \"f00f2e97-93de-4e24-9495-e693bc6dee0a\") " pod="openstack/nova-metadata-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.546267 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70371e42-7f78-41cf-a2f7-c1322e103ca3-logs\") pod \"nova-api-0\" (UID: \"70371e42-7f78-41cf-a2f7-c1322e103ca3\") " pod="openstack/nova-api-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.546297 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70371e42-7f78-41cf-a2f7-c1322e103ca3-config-data\") pod \"nova-api-0\" (UID: \"70371e42-7f78-41cf-a2f7-c1322e103ca3\") " pod="openstack/nova-api-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.546336 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f00f2e97-93de-4e24-9495-e693bc6dee0a-config-data\") pod \"nova-metadata-0\" (UID: \"f00f2e97-93de-4e24-9495-e693bc6dee0a\") " pod="openstack/nova-metadata-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 
17:10:18.546369 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbdg7\" (UniqueName: \"kubernetes.io/projected/70371e42-7f78-41cf-a2f7-c1322e103ca3-kube-api-access-wbdg7\") pod \"nova-api-0\" (UID: \"70371e42-7f78-41cf-a2f7-c1322e103ca3\") " pod="openstack/nova-api-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.546426 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00f2e97-93de-4e24-9495-e693bc6dee0a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f00f2e97-93de-4e24-9495-e693bc6dee0a\") " pod="openstack/nova-metadata-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.546451 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70371e42-7f78-41cf-a2f7-c1322e103ca3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"70371e42-7f78-41cf-a2f7-c1322e103ca3\") " pod="openstack/nova-api-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.546909 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70371e42-7f78-41cf-a2f7-c1322e103ca3-logs\") pod \"nova-api-0\" (UID: \"70371e42-7f78-41cf-a2f7-c1322e103ca3\") " pod="openstack/nova-api-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.552511 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70371e42-7f78-41cf-a2f7-c1322e103ca3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"70371e42-7f78-41cf-a2f7-c1322e103ca3\") " pod="openstack/nova-api-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.553091 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70371e42-7f78-41cf-a2f7-c1322e103ca3-config-data\") pod \"nova-api-0\" (UID: \"70371e42-7f78-41cf-a2f7-c1322e103ca3\") " pod="openstack/nova-api-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.561167 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbdg7\" (UniqueName: \"kubernetes.io/projected/70371e42-7f78-41cf-a2f7-c1322e103ca3-kube-api-access-wbdg7\") pod \"nova-api-0\" (UID: \"70371e42-7f78-41cf-a2f7-c1322e103ca3\") " pod="openstack/nova-api-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.568523 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.648412 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z5j4\" (UniqueName: \"kubernetes.io/projected/594abc42-5146-4e9e-b9ed-a2c4e74de54b-kube-api-access-8z5j4\") pod \"dnsmasq-dns-757b4f8459-w52kt\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.648450 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-w52kt\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.648471 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-w52kt\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.648491 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-config\") pod \"dnsmasq-dns-757b4f8459-w52kt\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.648533 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00f2e97-93de-4e24-9495-e693bc6dee0a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f00f2e97-93de-4e24-9495-e693bc6dee0a\") " pod="openstack/nova-metadata-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.648580 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-w52kt\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.648603 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-w52kt\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.648631 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f00f2e97-93de-4e24-9495-e693bc6dee0a-logs\") pod \"nova-metadata-0\" (UID: \"f00f2e97-93de-4e24-9495-e693bc6dee0a\") " pod="openstack/nova-metadata-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.648658 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2g7m\" (UniqueName: \"kubernetes.io/projected/f00f2e97-93de-4e24-9495-e693bc6dee0a-kube-api-access-b2g7m\") pod \"nova-metadata-0\" (UID: \"f00f2e97-93de-4e24-9495-e693bc6dee0a\") 
" pod="openstack/nova-metadata-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.648722 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f00f2e97-93de-4e24-9495-e693bc6dee0a-config-data\") pod \"nova-metadata-0\" (UID: \"f00f2e97-93de-4e24-9495-e693bc6dee0a\") " pod="openstack/nova-metadata-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.650081 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f00f2e97-93de-4e24-9495-e693bc6dee0a-logs\") pod \"nova-metadata-0\" (UID: \"f00f2e97-93de-4e24-9495-e693bc6dee0a\") " pod="openstack/nova-metadata-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.658009 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f00f2e97-93de-4e24-9495-e693bc6dee0a-config-data\") pod \"nova-metadata-0\" (UID: \"f00f2e97-93de-4e24-9495-e693bc6dee0a\") " pod="openstack/nova-metadata-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.658605 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00f2e97-93de-4e24-9495-e693bc6dee0a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f00f2e97-93de-4e24-9495-e693bc6dee0a\") " pod="openstack/nova-metadata-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.666098 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2g7m\" (UniqueName: \"kubernetes.io/projected/f00f2e97-93de-4e24-9495-e693bc6dee0a-kube-api-access-b2g7m\") pod \"nova-metadata-0\" (UID: \"f00f2e97-93de-4e24-9495-e693bc6dee0a\") " pod="openstack/nova-metadata-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.757035 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.758322 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z5j4\" (UniqueName: \"kubernetes.io/projected/594abc42-5146-4e9e-b9ed-a2c4e74de54b-kube-api-access-8z5j4\") pod \"dnsmasq-dns-757b4f8459-w52kt\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.758386 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-w52kt\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.758406 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-w52kt\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.758430 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-config\") pod \"dnsmasq-dns-757b4f8459-w52kt\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.758542 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-w52kt\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.758567 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-w52kt\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.759231 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-w52kt\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.760200 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-w52kt\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.760388 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-w52kt\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:10:18 crc kubenswrapper[4768]: 
I1124 17:10:18.760634 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-config\") pod \"dnsmasq-dns-757b4f8459-w52kt\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.760927 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-w52kt\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.772537 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.780219 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z5j4\" (UniqueName: \"kubernetes.io/projected/594abc42-5146-4e9e-b9ed-a2c4e74de54b-kube-api-access-8z5j4\") pod \"dnsmasq-dns-757b4f8459-w52kt\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.857049 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:10:18 crc kubenswrapper[4768]: I1124 17:10:18.930472 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qdb22"] Nov 24 17:10:18 crc kubenswrapper[4768]: W1124 17:10:18.937633 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f88274f_db1f_4ab0_88bf_12a230c0c5e6.slice/crio-5492321fc54f4b27ecbd3a2f7ad54176042b26dd7e2a054475c2d6a2d8dfcd5b WatchSource:0}: Error finding container 5492321fc54f4b27ecbd3a2f7ad54176042b26dd7e2a054475c2d6a2d8dfcd5b: Status 404 returned error can't find the container with id 5492321fc54f4b27ecbd3a2f7ad54176042b26dd7e2a054475c2d6a2d8dfcd5b Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.004695 4768 generic.go:334] "Generic (PLEG): container finished" podID="9e27086a-b3f1-4532-b288-9bac47e38944" containerID="4862fa73ef3cd69111d9d2154a09525a5dfff19ffe3810ca4156d4346115bb33" exitCode=137 Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.004775 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e27086a-b3f1-4532-b288-9bac47e38944","Type":"ContainerDied","Data":"4862fa73ef3cd69111d9d2154a09525a5dfff19ffe3810ca4156d4346115bb33"} Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.023486 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qhvz6"] Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.027936 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qdb22" event={"ID":"8f88274f-db1f-4ab0-88bf-12a230c0c5e6","Type":"ContainerStarted","Data":"5492321fc54f4b27ecbd3a2f7ad54176042b26dd7e2a054475c2d6a2d8dfcd5b"} Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.028085 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qhvz6" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.035187 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.035304 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.050242 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qhvz6"] Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.058590 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.134656 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.171415 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd9e2e0-6b33-444a-a253-1d4e75a13681-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qhvz6\" (UID: \"bfd9e2e0-6b33-444a-a253-1d4e75a13681\") " pod="openstack/nova-cell1-conductor-db-sync-qhvz6" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.171546 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfd9e2e0-6b33-444a-a253-1d4e75a13681-scripts\") pod \"nova-cell1-conductor-db-sync-qhvz6\" (UID: \"bfd9e2e0-6b33-444a-a253-1d4e75a13681\") " pod="openstack/nova-cell1-conductor-db-sync-qhvz6" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.171568 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd9e2e0-6b33-444a-a253-1d4e75a13681-config-data\") pod \"nova-cell1-conductor-db-sync-qhvz6\" (UID: \"bfd9e2e0-6b33-444a-a253-1d4e75a13681\") " pod="openstack/nova-cell1-conductor-db-sync-qhvz6" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.171637 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwvht\" (UniqueName: \"kubernetes.io/projected/bfd9e2e0-6b33-444a-a253-1d4e75a13681-kube-api-access-nwvht\") pod \"nova-cell1-conductor-db-sync-qhvz6\" (UID: \"bfd9e2e0-6b33-444a-a253-1d4e75a13681\") " pod="openstack/nova-cell1-conductor-db-sync-qhvz6" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.212948 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.273282 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-config-data\") pod \"9e27086a-b3f1-4532-b288-9bac47e38944\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.273680 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e27086a-b3f1-4532-b288-9bac47e38944-run-httpd\") pod \"9e27086a-b3f1-4532-b288-9bac47e38944\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.273753 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e27086a-b3f1-4532-b288-9bac47e38944-log-httpd\") pod \"9e27086a-b3f1-4532-b288-9bac47e38944\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.273806 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-ceilometer-tls-certs\") pod \"9e27086a-b3f1-4532-b288-9bac47e38944\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.273841 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-combined-ca-bundle\") pod \"9e27086a-b3f1-4532-b288-9bac47e38944\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.273930 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84t8j\" (UniqueName: \"kubernetes.io/projected/9e27086a-b3f1-4532-b288-9bac47e38944-kube-api-access-84t8j\") pod \"9e27086a-b3f1-4532-b288-9bac47e38944\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.273993 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-sg-core-conf-yaml\") pod \"9e27086a-b3f1-4532-b288-9bac47e38944\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.274064 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-scripts\") pod \"9e27086a-b3f1-4532-b288-9bac47e38944\" (UID: \"9e27086a-b3f1-4532-b288-9bac47e38944\") " Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.274386 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e27086a-b3f1-4532-b288-9bac47e38944-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9e27086a-b3f1-4532-b288-9bac47e38944" (UID: "9e27086a-b3f1-4532-b288-9bac47e38944"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.274491 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e27086a-b3f1-4532-b288-9bac47e38944-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9e27086a-b3f1-4532-b288-9bac47e38944" (UID: "9e27086a-b3f1-4532-b288-9bac47e38944"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.274504 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd9e2e0-6b33-444a-a253-1d4e75a13681-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qhvz6\" (UID: \"bfd9e2e0-6b33-444a-a253-1d4e75a13681\") " pod="openstack/nova-cell1-conductor-db-sync-qhvz6" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.274850 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfd9e2e0-6b33-444a-a253-1d4e75a13681-scripts\") pod \"nova-cell1-conductor-db-sync-qhvz6\" (UID: \"bfd9e2e0-6b33-444a-a253-1d4e75a13681\") " pod="openstack/nova-cell1-conductor-db-sync-qhvz6" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.274882 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd9e2e0-6b33-444a-a253-1d4e75a13681-config-data\") pod \"nova-cell1-conductor-db-sync-qhvz6\" (UID: \"bfd9e2e0-6b33-444a-a253-1d4e75a13681\") " pod="openstack/nova-cell1-conductor-db-sync-qhvz6" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.275022 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwvht\" (UniqueName: \"kubernetes.io/projected/bfd9e2e0-6b33-444a-a253-1d4e75a13681-kube-api-access-nwvht\") pod \"nova-cell1-conductor-db-sync-qhvz6\" (UID: \"bfd9e2e0-6b33-444a-a253-1d4e75a13681\") " pod="openstack/nova-cell1-conductor-db-sync-qhvz6" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.275186 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e27086a-b3f1-4532-b288-9bac47e38944-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.275202 4768 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e27086a-b3f1-4532-b288-9bac47e38944-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.280933 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd9e2e0-6b33-444a-a253-1d4e75a13681-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qhvz6\" (UID: \"bfd9e2e0-6b33-444a-a253-1d4e75a13681\") " pod="openstack/nova-cell1-conductor-db-sync-qhvz6" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.284174 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfd9e2e0-6b33-444a-a253-1d4e75a13681-scripts\") pod \"nova-cell1-conductor-db-sync-qhvz6\" (UID: \"bfd9e2e0-6b33-444a-a253-1d4e75a13681\") " pod="openstack/nova-cell1-conductor-db-sync-qhvz6" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.284201 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd9e2e0-6b33-444a-a253-1d4e75a13681-config-data\") pod \"nova-cell1-conductor-db-sync-qhvz6\" (UID: \"bfd9e2e0-6b33-444a-a253-1d4e75a13681\") " pod="openstack/nova-cell1-conductor-db-sync-qhvz6" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.293906 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwvht\" (UniqueName: 
\"kubernetes.io/projected/bfd9e2e0-6b33-444a-a253-1d4e75a13681-kube-api-access-nwvht\") pod \"nova-cell1-conductor-db-sync-qhvz6\" (UID: \"bfd9e2e0-6b33-444a-a253-1d4e75a13681\") " pod="openstack/nova-cell1-conductor-db-sync-qhvz6" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.304994 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e27086a-b3f1-4532-b288-9bac47e38944-kube-api-access-84t8j" (OuterVolumeSpecName: "kube-api-access-84t8j") pod "9e27086a-b3f1-4532-b288-9bac47e38944" (UID: "9e27086a-b3f1-4532-b288-9bac47e38944"). InnerVolumeSpecName "kube-api-access-84t8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.311540 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-scripts" (OuterVolumeSpecName: "scripts") pod "9e27086a-b3f1-4532-b288-9bac47e38944" (UID: "9e27086a-b3f1-4532-b288-9bac47e38944"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.337805 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9e27086a-b3f1-4532-b288-9bac47e38944" (UID: "9e27086a-b3f1-4532-b288-9bac47e38944"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.354511 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qhvz6" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.371004 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.377016 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84t8j\" (UniqueName: \"kubernetes.io/projected/9e27086a-b3f1-4532-b288-9bac47e38944-kube-api-access-84t8j\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.382598 4768 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.383114 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.385587 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.396377 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9e27086a-b3f1-4532-b288-9bac47e38944" (UID: "9e27086a-b3f1-4532-b288-9bac47e38944"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.420873 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e27086a-b3f1-4532-b288-9bac47e38944" (UID: "9e27086a-b3f1-4532-b288-9bac47e38944"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.451431 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-config-data" (OuterVolumeSpecName: "config-data") pod "9e27086a-b3f1-4532-b288-9bac47e38944" (UID: "9e27086a-b3f1-4532-b288-9bac47e38944"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.478097 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-w52kt"] Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.485146 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.485189 4768 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.485207 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e27086a-b3f1-4532-b288-9bac47e38944-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:19 crc kubenswrapper[4768]: I1124 17:10:19.866736 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qhvz6"] Nov 24 17:10:19 crc kubenswrapper[4768]: W1124 17:10:19.874151 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfd9e2e0_6b33_444a_a253_1d4e75a13681.slice/crio-b02dc1a2f6ca630e07e86e5f21c699f2cfd95b920523aa3f5dcd4b84184585a7 WatchSource:0}: Error finding container b02dc1a2f6ca630e07e86e5f21c699f2cfd95b920523aa3f5dcd4b84184585a7: Status 404 returned error can't find the container with id b02dc1a2f6ca630e07e86e5f21c699f2cfd95b920523aa3f5dcd4b84184585a7 Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.051717 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e27086a-b3f1-4532-b288-9bac47e38944","Type":"ContainerDied","Data":"b881f2f026771771b76096ecdf0e78e380025b22e6ec392e1b42fe855064f2d4"} Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.051780 4768 scope.go:117] "RemoveContainer" containerID="4862fa73ef3cd69111d9d2154a09525a5dfff19ffe3810ca4156d4346115bb33" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.051787 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.054512 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"319b2ddf-3b71-41c9-8fd8-7830b21ba3ec","Type":"ContainerStarted","Data":"c0b81a32c8f5bd1645f5c66b1a0825ba7e8dc541984be6e191b65896d4789049"} Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.059123 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"70371e42-7f78-41cf-a2f7-c1322e103ca3","Type":"ContainerStarted","Data":"b98174682361bc0698c6125af0ce7cfe82d4a3ba1e88caede81d15f08bfb496b"} Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.068748 4768 generic.go:334] "Generic (PLEG): container finished" podID="594abc42-5146-4e9e-b9ed-a2c4e74de54b" containerID="b1271e23e6b3f996923ee7848b69739b227ce87564abb1dbd4cc5867a3234ef1" exitCode=0 Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.068846 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-w52kt" event={"ID":"594abc42-5146-4e9e-b9ed-a2c4e74de54b","Type":"ContainerDied","Data":"b1271e23e6b3f996923ee7848b69739b227ce87564abb1dbd4cc5867a3234ef1"} Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.068871 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-w52kt" event={"ID":"594abc42-5146-4e9e-b9ed-a2c4e74de54b","Type":"ContainerStarted","Data":"e425b8cc4e339ee2d6da9f7b84346805bd132bf881192c2ddcc492292b341577"} Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.081884 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qdb22" event={"ID":"8f88274f-db1f-4ab0-88bf-12a230c0c5e6","Type":"ContainerStarted","Data":"dfd7d81cd6a5b8d2d1ae543a561e8e296c666380d0962e47e63eeb5721135f26"} Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.098787 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qhvz6" event={"ID":"bfd9e2e0-6b33-444a-a253-1d4e75a13681","Type":"ContainerStarted","Data":"b02dc1a2f6ca630e07e86e5f21c699f2cfd95b920523aa3f5dcd4b84184585a7"} Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.104442 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f0d523be-42d4-491c-9e07-3b76db03250c","Type":"ContainerStarted","Data":"f6cfd4ecc628cef5ded45d4bc480fd2211cf693a651f8f8b0432347778a52c16"} Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.107426 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.135530 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f00f2e97-93de-4e24-9495-e693bc6dee0a","Type":"ContainerStarted","Data":"0e05bd484787c411c84a0d12adcb89f8d84493717f83e1490e48b7da473df5b9"} Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.147098 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.163438 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:10:20 crc kubenswrapper[4768]: E1124 17:10:20.164002 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e27086a-b3f1-4532-b288-9bac47e38944" containerName="sg-core" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.164023 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9e27086a-b3f1-4532-b288-9bac47e38944" containerName="sg-core" Nov 24 17:10:20 crc kubenswrapper[4768]: E1124 17:10:20.164041 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e27086a-b3f1-4532-b288-9bac47e38944" containerName="ceilometer-central-agent" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.164048 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e27086a-b3f1-4532-b288-9bac47e38944" containerName="ceilometer-central-agent" Nov 24 17:10:20 crc kubenswrapper[4768]: E1124 17:10:20.164055 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e27086a-b3f1-4532-b288-9bac47e38944" containerName="ceilometer-notification-agent" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.164065 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e27086a-b3f1-4532-b288-9bac47e38944" containerName="ceilometer-notification-agent" Nov 24 17:10:20 crc kubenswrapper[4768]: E1124 17:10:20.164101 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e27086a-b3f1-4532-b288-9bac47e38944" containerName="proxy-httpd" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.164112 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e27086a-b3f1-4532-b288-9bac47e38944" containerName="proxy-httpd" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.164295 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e27086a-b3f1-4532-b288-9bac47e38944" containerName="proxy-httpd" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.164309 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e27086a-b3f1-4532-b288-9bac47e38944" containerName="ceilometer-notification-agent" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.164325 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e27086a-b3f1-4532-b288-9bac47e38944" containerName="sg-core" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.164374 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e27086a-b3f1-4532-b288-9bac47e38944" containerName="ceilometer-central-agent" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.166290 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.168206 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.168386 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.170163 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.199457 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-qhvz6" podStartSLOduration=2.199441788 podStartE2EDuration="2.199441788s" podCreationTimestamp="2025-11-24 17:10:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:10:20.135983895 +0000 UTC m=+1101.382952553" watchObservedRunningTime="2025-11-24 17:10:20.199441788 +0000 UTC m=+1101.446410446" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.200850 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.211172 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-qdb22" podStartSLOduration=3.2111604 podStartE2EDuration="3.2111604s" podCreationTimestamp="2025-11-24 17:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:10:20.167006621 +0000 UTC m=+1101.413975279" watchObservedRunningTime="2025-11-24 17:10:20.2111604 +0000 UTC m=+1101.458129058" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.305700 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fc2880d-6b67-430a-8a36-6339821b2fb0-log-httpd\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.305754 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.305823 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.305882 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-scripts\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.305902 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-config-data\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.305944 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.305983 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fc2880d-6b67-430a-8a36-6339821b2fb0-run-httpd\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.306095 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssjft\" (UniqueName: \"kubernetes.io/projected/8fc2880d-6b67-430a-8a36-6339821b2fb0-kube-api-access-ssjft\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.408133 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-scripts\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.408177 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-config-data\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.408201 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.408242 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fc2880d-6b67-430a-8a36-6339821b2fb0-run-httpd\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.408307 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssjft\" (UniqueName: \"kubernetes.io/projected/8fc2880d-6b67-430a-8a36-6339821b2fb0-kube-api-access-ssjft\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.408328 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fc2880d-6b67-430a-8a36-6339821b2fb0-log-httpd\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.408413 4768 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.408450 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.410054 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fc2880d-6b67-430a-8a36-6339821b2fb0-run-httpd\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.411157 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fc2880d-6b67-430a-8a36-6339821b2fb0-log-httpd\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.416042 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.416132 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.419064 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-scripts\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.424701 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssjft\" (UniqueName: \"kubernetes.io/projected/8fc2880d-6b67-430a-8a36-6339821b2fb0-kube-api-access-ssjft\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.429268 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-config-data\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.441025 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " pod="openstack/ceilometer-0" Nov 24 17:10:20 crc kubenswrapper[4768]: I1124 17:10:20.516242 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 17:10:21 crc kubenswrapper[4768]: I1124 17:10:21.154956 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qhvz6" event={"ID":"bfd9e2e0-6b33-444a-a253-1d4e75a13681","Type":"ContainerStarted","Data":"b103acf9ad0f3d26f8f85be888daf5cac6a3f66a63e1328fae01d08553aa855d"} Nov 24 17:10:21 crc kubenswrapper[4768]: I1124 17:10:21.599083 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e27086a-b3f1-4532-b288-9bac47e38944" path="/var/lib/kubelet/pods/9e27086a-b3f1-4532-b288-9bac47e38944/volumes" Nov 24 17:10:21 crc kubenswrapper[4768]: I1124 17:10:21.642419 4768 scope.go:117] "RemoveContainer" containerID="f72247f294691b11b929b4e58714109eed076cc3172a18400938a2a87dca8c9a" Nov 24 17:10:21 crc kubenswrapper[4768]: I1124 17:10:21.855156 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 17:10:21 crc kubenswrapper[4768]: I1124 17:10:21.869109 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 17:10:22 crc kubenswrapper[4768]: I1124 17:10:22.555857 4768 scope.go:117] "RemoveContainer" containerID="696b9c29ccefbb4166f903c633d8ccab74be755e3b04fe83a9a75e3bf13ae8b5" Nov 24 17:10:22 crc kubenswrapper[4768]: I1124 17:10:22.628169 4768 scope.go:117] "RemoveContainer" containerID="32565bc959dd0d8638b8a9c16beb66fe80e8480553a2324b0a546a9067af26cc" Nov 24 17:10:23 crc kubenswrapper[4768]: I1124 17:10:23.147698 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:10:23 crc kubenswrapper[4768]: W1124 17:10:23.152407 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fc2880d_6b67_430a_8a36_6339821b2fb0.slice/crio-7936207b37e4f2dfe10228ec1dff30be20f9012513f9b83ff242deda1c190102 WatchSource:0}: Error finding container 7936207b37e4f2dfe10228ec1dff30be20f9012513f9b83ff242deda1c190102: Status 404 returned error can't find the container with id 7936207b37e4f2dfe10228ec1dff30be20f9012513f9b83ff242deda1c190102 Nov 24 17:10:23 crc kubenswrapper[4768]: I1124 17:10:23.186070 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f0d523be-42d4-491c-9e07-3b76db03250c","Type":"ContainerStarted","Data":"c11a488554c2134b02559a3926f2c3e3d7e3f717e0a92bbdca7cae7bfe3dbe73"} Nov 24 17:10:23 crc kubenswrapper[4768]: I1124 17:10:23.192089 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f00f2e97-93de-4e24-9495-e693bc6dee0a","Type":"ContainerStarted","Data":"f6be5c5d71e2a5d6dbea667d11591b8ae129f58bd2ab9f253bba2c27aefffdce"} Nov 24 17:10:23 crc kubenswrapper[4768]: I1124 17:10:23.203898 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"319b2ddf-3b71-41c9-8fd8-7830b21ba3ec","Type":"ContainerStarted","Data":"75ca58064731d97dfe67852f03b501219fe9adc5925e18e584331e214e1e04cc"} Nov 24 17:10:23 crc kubenswrapper[4768]: I1124 17:10:23.203917 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="319b2ddf-3b71-41c9-8fd8-7830b21ba3ec" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://75ca58064731d97dfe67852f03b501219fe9adc5925e18e584331e214e1e04cc" gracePeriod=30 Nov 24 17:10:23 crc kubenswrapper[4768]: I1124 17:10:23.206328 4768 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-api-0" event={"ID":"70371e42-7f78-41cf-a2f7-c1322e103ca3","Type":"ContainerStarted","Data":"61bb31569e5e1aa10b53da4474c773b324742025d087b7beb5e91aba6a7f89d7"} Nov 24 17:10:23 crc kubenswrapper[4768]: I1124 17:10:23.207539 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.796412136 podStartE2EDuration="5.207524595s" podCreationTimestamp="2025-11-24 17:10:18 +0000 UTC" firstStartedPulling="2025-11-24 17:10:19.217136283 +0000 UTC m=+1100.464104941" lastFinishedPulling="2025-11-24 17:10:22.628248742 +0000 UTC m=+1103.875217400" observedRunningTime="2025-11-24 17:10:23.201919617 +0000 UTC m=+1104.448888295" watchObservedRunningTime="2025-11-24 17:10:23.207524595 +0000 UTC m=+1104.454493243" Nov 24 17:10:23 crc kubenswrapper[4768]: I1124 17:10:23.209174 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-w52kt" event={"ID":"594abc42-5146-4e9e-b9ed-a2c4e74de54b","Type":"ContainerStarted","Data":"bd7257e6870ac84a54ce1c5eca8dbe774690d39875e28eeceab56162fa83764c"} Nov 24 17:10:23 crc kubenswrapper[4768]: I1124 17:10:23.209800 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:10:23 crc kubenswrapper[4768]: I1124 17:10:23.224141 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fc2880d-6b67-430a-8a36-6339821b2fb0","Type":"ContainerStarted","Data":"7936207b37e4f2dfe10228ec1dff30be20f9012513f9b83ff242deda1c190102"} Nov 24 17:10:23 crc kubenswrapper[4768]: I1124 17:10:23.229639 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.702757249 podStartE2EDuration="5.22961951s" podCreationTimestamp="2025-11-24 17:10:18 +0000 UTC" firstStartedPulling="2025-11-24 17:10:19.068120801 +0000 UTC m=+1100.315089459" lastFinishedPulling="2025-11-24 17:10:22.594983052 +0000 UTC m=+1103.841951720" observedRunningTime="2025-11-24 17:10:23.218797324 +0000 UTC m=+1104.465765982" watchObservedRunningTime="2025-11-24 17:10:23.22961951 +0000 UTC m=+1104.476588168" Nov 24 17:10:23 crc kubenswrapper[4768]: I1124 17:10:23.429744 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 17:10:23 crc kubenswrapper[4768]: I1124 17:10:23.569262 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 17:10:24 crc kubenswrapper[4768]: I1124 17:10:24.234386 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"70371e42-7f78-41cf-a2f7-c1322e103ca3","Type":"ContainerStarted","Data":"4998736fdf8f793a1a6c23e29d4b8530bde6b8bfd091b51f90f9b198156885ac"} Nov 24 17:10:24 crc kubenswrapper[4768]: I1124 17:10:24.238228 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fc2880d-6b67-430a-8a36-6339821b2fb0","Type":"ContainerStarted","Data":"67808bd1850913efbd3bbeb884890ac15880e29fe5927306718df859a79d328d"} Nov 24 17:10:24 crc kubenswrapper[4768]: I1124 17:10:24.243272 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f00f2e97-93de-4e24-9495-e693bc6dee0a" containerName="nova-metadata-log" containerID="cri-o://f6be5c5d71e2a5d6dbea667d11591b8ae129f58bd2ab9f253bba2c27aefffdce" gracePeriod=30 Nov 24 17:10:24 crc kubenswrapper[4768]: I1124 
17:10:24.243592 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f00f2e97-93de-4e24-9495-e693bc6dee0a","Type":"ContainerStarted","Data":"a39eb3238f89b6f4472eec1e9650e66c2b9180b04f46cf6306e806b7436e5d69"} Nov 24 17:10:24 crc kubenswrapper[4768]: I1124 17:10:24.244238 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f00f2e97-93de-4e24-9495-e693bc6dee0a" containerName="nova-metadata-metadata" containerID="cri-o://a39eb3238f89b6f4472eec1e9650e66c2b9180b04f46cf6306e806b7436e5d69" gracePeriod=30 Nov 24 17:10:24 crc kubenswrapper[4768]: I1124 17:10:24.257418 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-w52kt" podStartSLOduration=6.25732752 podStartE2EDuration="6.25732752s" podCreationTimestamp="2025-11-24 17:10:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:10:23.236619108 +0000 UTC m=+1104.483587766" watchObservedRunningTime="2025-11-24 17:10:24.25732752 +0000 UTC m=+1105.504296188" Nov 24 17:10:24 crc kubenswrapper[4768]: I1124 17:10:24.262607 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.047825188 podStartE2EDuration="6.262587238s" podCreationTimestamp="2025-11-24 17:10:18 +0000 UTC" firstStartedPulling="2025-11-24 17:10:19.443361167 +0000 UTC m=+1100.690329825" lastFinishedPulling="2025-11-24 17:10:22.658123217 +0000 UTC m=+1103.905091875" observedRunningTime="2025-11-24 17:10:24.250338152 +0000 UTC m=+1105.497306820" watchObservedRunningTime="2025-11-24 17:10:24.262587238 +0000 UTC m=+1105.509555916" Nov 24 17:10:24 crc kubenswrapper[4768]: I1124 17:10:24.279275 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.054636862 podStartE2EDuration="6.279245639s" podCreationTimestamp="2025-11-24 17:10:18 +0000 UTC" firstStartedPulling="2025-11-24 17:10:19.431590585 +0000 UTC m=+1100.678559243" lastFinishedPulling="2025-11-24 17:10:22.656199362 +0000 UTC m=+1103.903168020" observedRunningTime="2025-11-24 17:10:24.269913075 +0000 UTC m=+1105.516881723" watchObservedRunningTime="2025-11-24 17:10:24.279245639 +0000 UTC m=+1105.526214307" Nov 24 17:10:24 crc kubenswrapper[4768]: I1124 17:10:24.821514 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 17:10:24 crc kubenswrapper[4768]: I1124 17:10:24.899104 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00f2e97-93de-4e24-9495-e693bc6dee0a-combined-ca-bundle\") pod \"f00f2e97-93de-4e24-9495-e693bc6dee0a\" (UID: \"f00f2e97-93de-4e24-9495-e693bc6dee0a\") " Nov 24 17:10:24 crc kubenswrapper[4768]: I1124 17:10:24.899232 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f00f2e97-93de-4e24-9495-e693bc6dee0a-config-data\") pod \"f00f2e97-93de-4e24-9495-e693bc6dee0a\" (UID: \"f00f2e97-93de-4e24-9495-e693bc6dee0a\") " Nov 24 17:10:24 crc kubenswrapper[4768]: I1124 17:10:24.899260 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2g7m\" (UniqueName: \"kubernetes.io/projected/f00f2e97-93de-4e24-9495-e693bc6dee0a-kube-api-access-b2g7m\") pod \"f00f2e97-93de-4e24-9495-e693bc6dee0a\" (UID: \"f00f2e97-93de-4e24-9495-e693bc6dee0a\") " Nov 24 17:10:24 crc kubenswrapper[4768]: I1124 17:10:24.899453 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f00f2e97-93de-4e24-9495-e693bc6dee0a-logs\") pod \"f00f2e97-93de-4e24-9495-e693bc6dee0a\" (UID: \"f00f2e97-93de-4e24-9495-e693bc6dee0a\") " Nov 24 17:10:24 crc kubenswrapper[4768]: I1124 17:10:24.900128 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f00f2e97-93de-4e24-9495-e693bc6dee0a-logs" (OuterVolumeSpecName: "logs") pod "f00f2e97-93de-4e24-9495-e693bc6dee0a" (UID: "f00f2e97-93de-4e24-9495-e693bc6dee0a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:10:24 crc kubenswrapper[4768]: I1124 17:10:24.908590 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f00f2e97-93de-4e24-9495-e693bc6dee0a-kube-api-access-b2g7m" (OuterVolumeSpecName: "kube-api-access-b2g7m") pod "f00f2e97-93de-4e24-9495-e693bc6dee0a" (UID: "f00f2e97-93de-4e24-9495-e693bc6dee0a"). InnerVolumeSpecName "kube-api-access-b2g7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:10:24 crc kubenswrapper[4768]: I1124 17:10:24.928462 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f00f2e97-93de-4e24-9495-e693bc6dee0a-config-data" (OuterVolumeSpecName: "config-data") pod "f00f2e97-93de-4e24-9495-e693bc6dee0a" (UID: "f00f2e97-93de-4e24-9495-e693bc6dee0a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:10:24 crc kubenswrapper[4768]: I1124 17:10:24.947673 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f00f2e97-93de-4e24-9495-e693bc6dee0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f00f2e97-93de-4e24-9495-e693bc6dee0a" (UID: "f00f2e97-93de-4e24-9495-e693bc6dee0a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.001540 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00f2e97-93de-4e24-9495-e693bc6dee0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.001589 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f00f2e97-93de-4e24-9495-e693bc6dee0a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.001602 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2g7m\" (UniqueName: \"kubernetes.io/projected/f00f2e97-93de-4e24-9495-e693bc6dee0a-kube-api-access-b2g7m\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.001616 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f00f2e97-93de-4e24-9495-e693bc6dee0a-logs\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.251100 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fc2880d-6b67-430a-8a36-6339821b2fb0","Type":"ContainerStarted","Data":"fa5743b3d151ab345867bca12752ae4c8697a767c2ae73b04e69a4f2df6a6e7a"} Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.252949 4768 generic.go:334] "Generic (PLEG): container finished" podID="f00f2e97-93de-4e24-9495-e693bc6dee0a" containerID="a39eb3238f89b6f4472eec1e9650e66c2b9180b04f46cf6306e806b7436e5d69" exitCode=0 Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.252976 4768 generic.go:334] "Generic (PLEG): container finished" podID="f00f2e97-93de-4e24-9495-e693bc6dee0a" containerID="f6be5c5d71e2a5d6dbea667d11591b8ae129f58bd2ab9f253bba2c27aefffdce" exitCode=143 Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.253184 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f00f2e97-93de-4e24-9495-e693bc6dee0a","Type":"ContainerDied","Data":"a39eb3238f89b6f4472eec1e9650e66c2b9180b04f46cf6306e806b7436e5d69"} Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.253261 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f00f2e97-93de-4e24-9495-e693bc6dee0a","Type":"ContainerDied","Data":"f6be5c5d71e2a5d6dbea667d11591b8ae129f58bd2ab9f253bba2c27aefffdce"} Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.253278 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f00f2e97-93de-4e24-9495-e693bc6dee0a","Type":"ContainerDied","Data":"0e05bd484787c411c84a0d12adcb89f8d84493717f83e1490e48b7da473df5b9"} Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.253303 4768 scope.go:117] "RemoveContainer" containerID="a39eb3238f89b6f4472eec1e9650e66c2b9180b04f46cf6306e806b7436e5d69" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.253537 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.279895 4768 scope.go:117] "RemoveContainer" containerID="f6be5c5d71e2a5d6dbea667d11591b8ae129f58bd2ab9f253bba2c27aefffdce" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.315959 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.322607 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.335218 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 17:10:25 crc kubenswrapper[4768]: E1124 17:10:25.336147 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f00f2e97-93de-4e24-9495-e693bc6dee0a" containerName="nova-metadata-metadata" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.336167 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00f2e97-93de-4e24-9495-e693bc6dee0a" containerName="nova-metadata-metadata" Nov 24 17:10:25 crc kubenswrapper[4768]: E1124 17:10:25.336188 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f00f2e97-93de-4e24-9495-e693bc6dee0a" containerName="nova-metadata-log" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.336196 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00f2e97-93de-4e24-9495-e693bc6dee0a" containerName="nova-metadata-log" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.336713 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f00f2e97-93de-4e24-9495-e693bc6dee0a" containerName="nova-metadata-log" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.336735 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f00f2e97-93de-4e24-9495-e693bc6dee0a" containerName="nova-metadata-metadata" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.346759 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.346891 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.353841 4768 scope.go:117] "RemoveContainer" containerID="a39eb3238f89b6f4472eec1e9650e66c2b9180b04f46cf6306e806b7436e5d69" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.354331 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.354969 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 17:10:25 crc kubenswrapper[4768]: E1124 17:10:25.355096 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a39eb3238f89b6f4472eec1e9650e66c2b9180b04f46cf6306e806b7436e5d69\": container with ID starting with a39eb3238f89b6f4472eec1e9650e66c2b9180b04f46cf6306e806b7436e5d69 not found: ID does not exist" containerID="a39eb3238f89b6f4472eec1e9650e66c2b9180b04f46cf6306e806b7436e5d69" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.355123 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a39eb3238f89b6f4472eec1e9650e66c2b9180b04f46cf6306e806b7436e5d69"} err="failed to get container status \"a39eb3238f89b6f4472eec1e9650e66c2b9180b04f46cf6306e806b7436e5d69\": rpc error: code = NotFound desc = could not find container \"a39eb3238f89b6f4472eec1e9650e66c2b9180b04f46cf6306e806b7436e5d69\": container with ID starting with a39eb3238f89b6f4472eec1e9650e66c2b9180b04f46cf6306e806b7436e5d69 not found: ID does not exist" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.355140 4768 scope.go:117] "RemoveContainer" containerID="f6be5c5d71e2a5d6dbea667d11591b8ae129f58bd2ab9f253bba2c27aefffdce" Nov 24 17:10:25 crc kubenswrapper[4768]: E1124 17:10:25.365669 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6be5c5d71e2a5d6dbea667d11591b8ae129f58bd2ab9f253bba2c27aefffdce\": container with ID starting with f6be5c5d71e2a5d6dbea667d11591b8ae129f58bd2ab9f253bba2c27aefffdce not found: ID does not exist" containerID="f6be5c5d71e2a5d6dbea667d11591b8ae129f58bd2ab9f253bba2c27aefffdce" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.365715 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6be5c5d71e2a5d6dbea667d11591b8ae129f58bd2ab9f253bba2c27aefffdce"} err="failed to get container status \"f6be5c5d71e2a5d6dbea667d11591b8ae129f58bd2ab9f253bba2c27aefffdce\": rpc error: code = NotFound desc = could not find container \"f6be5c5d71e2a5d6dbea667d11591b8ae129f58bd2ab9f253bba2c27aefffdce\": container with ID starting with f6be5c5d71e2a5d6dbea667d11591b8ae129f58bd2ab9f253bba2c27aefffdce not found: ID does not exist" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.365748 4768 scope.go:117] "RemoveContainer" containerID="a39eb3238f89b6f4472eec1e9650e66c2b9180b04f46cf6306e806b7436e5d69" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.368960 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a39eb3238f89b6f4472eec1e9650e66c2b9180b04f46cf6306e806b7436e5d69"} err="failed to get container status \"a39eb3238f89b6f4472eec1e9650e66c2b9180b04f46cf6306e806b7436e5d69\": rpc error: code = NotFound desc = could not find container \"a39eb3238f89b6f4472eec1e9650e66c2b9180b04f46cf6306e806b7436e5d69\": container with ID starting with 
a39eb3238f89b6f4472eec1e9650e66c2b9180b04f46cf6306e806b7436e5d69 not found: ID does not exist" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.369002 4768 scope.go:117] "RemoveContainer" containerID="f6be5c5d71e2a5d6dbea667d11591b8ae129f58bd2ab9f253bba2c27aefffdce" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.369304 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6be5c5d71e2a5d6dbea667d11591b8ae129f58bd2ab9f253bba2c27aefffdce"} err="failed to get container status \"f6be5c5d71e2a5d6dbea667d11591b8ae129f58bd2ab9f253bba2c27aefffdce\": rpc error: code = NotFound desc = could not find container \"f6be5c5d71e2a5d6dbea667d11591b8ae129f58bd2ab9f253bba2c27aefffdce\": container with ID starting with f6be5c5d71e2a5d6dbea667d11591b8ae129f58bd2ab9f253bba2c27aefffdce not found: ID does not exist" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.413655 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-config-data\") pod \"nova-metadata-0\" (UID: \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\") " pod="openstack/nova-metadata-0" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.413923 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-logs\") pod \"nova-metadata-0\" (UID: \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\") " pod="openstack/nova-metadata-0" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.414193 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99b9p\" (UniqueName: \"kubernetes.io/projected/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-kube-api-access-99b9p\") pod \"nova-metadata-0\" (UID: \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\") " pod="openstack/nova-metadata-0" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.414394 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\") " pod="openstack/nova-metadata-0" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.414482 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\") " pod="openstack/nova-metadata-0" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.516604 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-config-data\") pod \"nova-metadata-0\" (UID: \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\") " pod="openstack/nova-metadata-0" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.516668 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-logs\") pod \"nova-metadata-0\" (UID: \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\") " pod="openstack/nova-metadata-0" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.516709 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99b9p\" (UniqueName: \"kubernetes.io/projected/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-kube-api-access-99b9p\") pod \"nova-metadata-0\" (UID: \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\") " pod="openstack/nova-metadata-0" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.516756 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\") " pod="openstack/nova-metadata-0" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.516786 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\") " pod="openstack/nova-metadata-0" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.517390 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-logs\") pod \"nova-metadata-0\" (UID: \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\") " pod="openstack/nova-metadata-0" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.521187 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\") " pod="openstack/nova-metadata-0" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.521409 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-config-data\") pod \"nova-metadata-0\" (UID: \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\") " pod="openstack/nova-metadata-0" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.522814 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\") " pod="openstack/nova-metadata-0" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.533804 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99b9p\" (UniqueName: \"kubernetes.io/projected/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-kube-api-access-99b9p\") pod \"nova-metadata-0\" (UID: \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\") " pod="openstack/nova-metadata-0" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.593237 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f00f2e97-93de-4e24-9495-e693bc6dee0a" path="/var/lib/kubelet/pods/f00f2e97-93de-4e24-9495-e693bc6dee0a/volumes" Nov 24 17:10:25 crc kubenswrapper[4768]: I1124 17:10:25.681953 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 17:10:26 crc kubenswrapper[4768]: I1124 17:10:26.148560 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 17:10:26 crc kubenswrapper[4768]: I1124 17:10:26.265506 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbb41e74-ba79-42c6-ae70-8d86c8c26eff","Type":"ContainerStarted","Data":"2c0393c60f6c6e345e507b06914748802f6beb93b304c73ac45ef137a34603c9"} Nov 24 17:10:26 crc kubenswrapper[4768]: I1124 17:10:26.267020 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fc2880d-6b67-430a-8a36-6339821b2fb0","Type":"ContainerStarted","Data":"801adc3560fc59757f8e5267028bca9275a811cd888099b04e60b3ed4da09040"} Nov 24 17:10:27 crc kubenswrapper[4768]: I1124 17:10:27.278850 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbb41e74-ba79-42c6-ae70-8d86c8c26eff","Type":"ContainerStarted","Data":"d9822d343d0da2f2d89fa6b248509c39be1e2a76c89f5a24fe59fb61d79c1a71"} Nov 24 17:10:27 crc kubenswrapper[4768]: I1124 17:10:27.279185 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbb41e74-ba79-42c6-ae70-8d86c8c26eff","Type":"ContainerStarted","Data":"2efe5996240285397797305ade59091af0c1fbb1e743a6810fc8f0abf88d6888"} Nov 24 17:10:27 crc kubenswrapper[4768]: I1124 17:10:27.282837 4768 generic.go:334] "Generic (PLEG): container finished" podID="8f88274f-db1f-4ab0-88bf-12a230c0c5e6" containerID="dfd7d81cd6a5b8d2d1ae543a561e8e296c666380d0962e47e63eeb5721135f26" exitCode=0 Nov 24 17:10:27 crc kubenswrapper[4768]: I1124 17:10:27.282891 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qdb22" event={"ID":"8f88274f-db1f-4ab0-88bf-12a230c0c5e6","Type":"ContainerDied","Data":"dfd7d81cd6a5b8d2d1ae543a561e8e296c666380d0962e47e63eeb5721135f26"} Nov 24 17:10:27 crc kubenswrapper[4768]: I1124 17:10:27.287628 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fc2880d-6b67-430a-8a36-6339821b2fb0","Type":"ContainerStarted","Data":"72d42fc107c69087593f79543b8627ece8db95e6255248722c00b7dc89190c2f"} Nov 24 17:10:27 crc kubenswrapper[4768]: I1124 17:10:27.288424 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 17:10:27 crc kubenswrapper[4768]: I1124 17:10:27.300683 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.300669363 podStartE2EDuration="2.300669363s" podCreationTimestamp="2025-11-24 17:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:10:27.299822319 +0000 UTC m=+1108.546790997" watchObservedRunningTime="2025-11-24 17:10:27.300669363 +0000 UTC m=+1108.547638021" Nov 24 17:10:27 crc kubenswrapper[4768]: I1124 17:10:27.342159 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.0621382 podStartE2EDuration="7.342137305s" podCreationTimestamp="2025-11-24 17:10:20 +0000 UTC" firstStartedPulling="2025-11-24 17:10:23.154953829 +0000 UTC m=+1104.401922487" lastFinishedPulling="2025-11-24 17:10:26.434952934 +0000 UTC m=+1107.681921592" observedRunningTime="2025-11-24 17:10:27.332618976 +0000 UTC m=+1108.579587654" 
watchObservedRunningTime="2025-11-24 17:10:27.342137305 +0000 UTC m=+1108.589105973" Nov 24 17:10:28 crc kubenswrapper[4768]: I1124 17:10:28.297763 4768 generic.go:334] "Generic (PLEG): container finished" podID="bfd9e2e0-6b33-444a-a253-1d4e75a13681" containerID="b103acf9ad0f3d26f8f85be888daf5cac6a3f66a63e1328fae01d08553aa855d" exitCode=0 Nov 24 17:10:28 crc kubenswrapper[4768]: I1124 17:10:28.299796 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qhvz6" event={"ID":"bfd9e2e0-6b33-444a-a253-1d4e75a13681","Type":"ContainerDied","Data":"b103acf9ad0f3d26f8f85be888daf5cac6a3f66a63e1328fae01d08553aa855d"} Nov 24 17:10:28 crc kubenswrapper[4768]: I1124 17:10:28.569518 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 17:10:28 crc kubenswrapper[4768]: I1124 17:10:28.608641 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 17:10:28 crc kubenswrapper[4768]: I1124 17:10:28.705291 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qdb22" Nov 24 17:10:28 crc kubenswrapper[4768]: I1124 17:10:28.758763 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 17:10:28 crc kubenswrapper[4768]: I1124 17:10:28.758828 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 17:10:28 crc kubenswrapper[4768]: I1124 17:10:28.778526 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-scripts\") pod \"8f88274f-db1f-4ab0-88bf-12a230c0c5e6\" (UID: \"8f88274f-db1f-4ab0-88bf-12a230c0c5e6\") " Nov 24 17:10:28 crc kubenswrapper[4768]: I1124 17:10:28.778600 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrz6g\" (UniqueName: \"kubernetes.io/projected/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-kube-api-access-wrz6g\") pod \"8f88274f-db1f-4ab0-88bf-12a230c0c5e6\" (UID: \"8f88274f-db1f-4ab0-88bf-12a230c0c5e6\") " Nov 24 17:10:28 crc kubenswrapper[4768]: I1124 17:10:28.778642 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-config-data\") pod \"8f88274f-db1f-4ab0-88bf-12a230c0c5e6\" (UID: \"8f88274f-db1f-4ab0-88bf-12a230c0c5e6\") " Nov 24 17:10:28 crc kubenswrapper[4768]: I1124 17:10:28.778680 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-combined-ca-bundle\") pod \"8f88274f-db1f-4ab0-88bf-12a230c0c5e6\" (UID: \"8f88274f-db1f-4ab0-88bf-12a230c0c5e6\") " Nov 24 17:10:28 crc kubenswrapper[4768]: I1124 17:10:28.785435 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-scripts" (OuterVolumeSpecName: "scripts") pod "8f88274f-db1f-4ab0-88bf-12a230c0c5e6" (UID: "8f88274f-db1f-4ab0-88bf-12a230c0c5e6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:10:28 crc kubenswrapper[4768]: I1124 17:10:28.787544 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-kube-api-access-wrz6g" (OuterVolumeSpecName: "kube-api-access-wrz6g") pod "8f88274f-db1f-4ab0-88bf-12a230c0c5e6" (UID: "8f88274f-db1f-4ab0-88bf-12a230c0c5e6"). InnerVolumeSpecName "kube-api-access-wrz6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:10:28 crc kubenswrapper[4768]: I1124 17:10:28.808125 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-config-data" (OuterVolumeSpecName: "config-data") pod "8f88274f-db1f-4ab0-88bf-12a230c0c5e6" (UID: "8f88274f-db1f-4ab0-88bf-12a230c0c5e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:10:28 crc kubenswrapper[4768]: I1124 17:10:28.817685 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f88274f-db1f-4ab0-88bf-12a230c0c5e6" (UID: "8f88274f-db1f-4ab0-88bf-12a230c0c5e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:10:28 crc kubenswrapper[4768]: I1124 17:10:28.859115 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:10:28 crc kubenswrapper[4768]: I1124 17:10:28.881856 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:28 crc kubenswrapper[4768]: I1124 17:10:28.881887 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:28 crc kubenswrapper[4768]: I1124 17:10:28.881919 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrz6g\" (UniqueName: \"kubernetes.io/projected/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-kube-api-access-wrz6g\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:28 crc kubenswrapper[4768]: I1124 17:10:28.881933 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f88274f-db1f-4ab0-88bf-12a230c0c5e6-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:28 crc kubenswrapper[4768]: I1124 17:10:28.928109 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-fhwr9"] Nov 24 17:10:28 crc kubenswrapper[4768]: I1124 17:10:28.928394 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" podUID="da75d66a-010d-483d-b623-70707cc9af95" containerName="dnsmasq-dns" containerID="cri-o://0703ce620a53752d9ab07623a3d432daf6170a075729f7ed2040c1d914fe4d4c" gracePeriod=10 Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.316416 4768 generic.go:334] "Generic (PLEG): container finished" podID="da75d66a-010d-483d-b623-70707cc9af95" containerID="0703ce620a53752d9ab07623a3d432daf6170a075729f7ed2040c1d914fe4d4c" exitCode=0 Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.316652 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" event={"ID":"da75d66a-010d-483d-b623-70707cc9af95","Type":"ContainerDied","Data":"0703ce620a53752d9ab07623a3d432daf6170a075729f7ed2040c1d914fe4d4c"} Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.319252 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qdb22" Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.324597 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qdb22" event={"ID":"8f88274f-db1f-4ab0-88bf-12a230c0c5e6","Type":"ContainerDied","Data":"5492321fc54f4b27ecbd3a2f7ad54176042b26dd7e2a054475c2d6a2d8dfcd5b"} Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.324642 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5492321fc54f4b27ecbd3a2f7ad54176042b26dd7e2a054475c2d6a2d8dfcd5b" Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.373159 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.395394 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.490422 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-ovsdbserver-sb\") pod \"da75d66a-010d-483d-b623-70707cc9af95\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.490508 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-ovsdbserver-nb\") pod \"da75d66a-010d-483d-b623-70707cc9af95\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.490591 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwbb7\" (UniqueName: \"kubernetes.io/projected/da75d66a-010d-483d-b623-70707cc9af95-kube-api-access-cwbb7\") pod \"da75d66a-010d-483d-b623-70707cc9af95\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.490646 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-dns-svc\") pod \"da75d66a-010d-483d-b623-70707cc9af95\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.490752 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-dns-swift-storage-0\") pod \"da75d66a-010d-483d-b623-70707cc9af95\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.490778 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-config\") pod \"da75d66a-010d-483d-b623-70707cc9af95\" (UID: \"da75d66a-010d-483d-b623-70707cc9af95\") " Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.511550 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/da75d66a-010d-483d-b623-70707cc9af95-kube-api-access-cwbb7" (OuterVolumeSpecName: "kube-api-access-cwbb7") pod "da75d66a-010d-483d-b623-70707cc9af95" (UID: "da75d66a-010d-483d-b623-70707cc9af95"). InnerVolumeSpecName "kube-api-access-cwbb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.565486 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da75d66a-010d-483d-b623-70707cc9af95" (UID: "da75d66a-010d-483d-b623-70707cc9af95"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.594819 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwbb7\" (UniqueName: \"kubernetes.io/projected/da75d66a-010d-483d-b623-70707cc9af95-kube-api-access-cwbb7\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.595252 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.606416 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "da75d66a-010d-483d-b623-70707cc9af95" (UID: "da75d66a-010d-483d-b623-70707cc9af95"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.641003 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-config" (OuterVolumeSpecName: "config") pod "da75d66a-010d-483d-b623-70707cc9af95" (UID: "da75d66a-010d-483d-b623-70707cc9af95"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.662466 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "da75d66a-010d-483d-b623-70707cc9af95" (UID: "da75d66a-010d-483d-b623-70707cc9af95"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.662720 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "da75d66a-010d-483d-b623-70707cc9af95" (UID: "da75d66a-010d-483d-b623-70707cc9af95"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.697479 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.697512 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.697523 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.697531 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da75d66a-010d-483d-b623-70707cc9af95-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.707161 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.707224 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.707408 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fbb41e74-ba79-42c6-ae70-8d86c8c26eff" containerName="nova-metadata-log" containerID="cri-o://2efe5996240285397797305ade59091af0c1fbb1e743a6810fc8f0abf88d6888" gracePeriod=30 Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.707964 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="70371e42-7f78-41cf-a2f7-c1322e103ca3" containerName="nova-api-log" containerID="cri-o://61bb31569e5e1aa10b53da4474c773b324742025d087b7beb5e91aba6a7f89d7" gracePeriod=30 Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.708087 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="70371e42-7f78-41cf-a2f7-c1322e103ca3" containerName="nova-api-api" containerID="cri-o://4998736fdf8f793a1a6c23e29d4b8530bde6b8bfd091b51f90f9b198156885ac" gracePeriod=30 Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.708125 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fbb41e74-ba79-42c6-ae70-8d86c8c26eff" containerName="nova-metadata-metadata" containerID="cri-o://d9822d343d0da2f2d89fa6b248509c39be1e2a76c89f5a24fe59fb61d79c1a71" gracePeriod=30 Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.717864 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="70371e42-7f78-41cf-a2f7-c1322e103ca3" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": EOF" Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.717987 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="70371e42-7f78-41cf-a2f7-c1322e103ca3" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": EOF" Nov 24 17:10:29 crc kubenswrapper[4768]: I1124 17:10:29.883284 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qhvz6" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.001341 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.011400 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd9e2e0-6b33-444a-a253-1d4e75a13681-config-data\") pod \"bfd9e2e0-6b33-444a-a253-1d4e75a13681\" (UID: \"bfd9e2e0-6b33-444a-a253-1d4e75a13681\") " Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.011606 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfd9e2e0-6b33-444a-a253-1d4e75a13681-scripts\") pod \"bfd9e2e0-6b33-444a-a253-1d4e75a13681\" (UID: \"bfd9e2e0-6b33-444a-a253-1d4e75a13681\") " Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.011725 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd9e2e0-6b33-444a-a253-1d4e75a13681-combined-ca-bundle\") pod \"bfd9e2e0-6b33-444a-a253-1d4e75a13681\" (UID: \"bfd9e2e0-6b33-444a-a253-1d4e75a13681\") " Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.011877 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwvht\" (UniqueName: \"kubernetes.io/projected/bfd9e2e0-6b33-444a-a253-1d4e75a13681-kube-api-access-nwvht\") pod \"bfd9e2e0-6b33-444a-a253-1d4e75a13681\" (UID: \"bfd9e2e0-6b33-444a-a253-1d4e75a13681\") " Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.015852 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfd9e2e0-6b33-444a-a253-1d4e75a13681-scripts" (OuterVolumeSpecName: "scripts") pod "bfd9e2e0-6b33-444a-a253-1d4e75a13681" (UID: "bfd9e2e0-6b33-444a-a253-1d4e75a13681"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.021461 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfd9e2e0-6b33-444a-a253-1d4e75a13681-kube-api-access-nwvht" (OuterVolumeSpecName: "kube-api-access-nwvht") pod "bfd9e2e0-6b33-444a-a253-1d4e75a13681" (UID: "bfd9e2e0-6b33-444a-a253-1d4e75a13681"). InnerVolumeSpecName "kube-api-access-nwvht". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.037313 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfd9e2e0-6b33-444a-a253-1d4e75a13681-config-data" (OuterVolumeSpecName: "config-data") pod "bfd9e2e0-6b33-444a-a253-1d4e75a13681" (UID: "bfd9e2e0-6b33-444a-a253-1d4e75a13681"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.037893 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfd9e2e0-6b33-444a-a253-1d4e75a13681-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfd9e2e0-6b33-444a-a253-1d4e75a13681" (UID: "bfd9e2e0-6b33-444a-a253-1d4e75a13681"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.113584 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd9e2e0-6b33-444a-a253-1d4e75a13681-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.113610 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwvht\" (UniqueName: \"kubernetes.io/projected/bfd9e2e0-6b33-444a-a253-1d4e75a13681-kube-api-access-nwvht\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.113620 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfd9e2e0-6b33-444a-a253-1d4e75a13681-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.113629 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd9e2e0-6b33-444a-a253-1d4e75a13681-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.329643 4768 generic.go:334] "Generic (PLEG): container finished" podID="70371e42-7f78-41cf-a2f7-c1322e103ca3" containerID="61bb31569e5e1aa10b53da4474c773b324742025d087b7beb5e91aba6a7f89d7" exitCode=143 Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.329696 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"70371e42-7f78-41cf-a2f7-c1322e103ca3","Type":"ContainerDied","Data":"61bb31569e5e1aa10b53da4474c773b324742025d087b7beb5e91aba6a7f89d7"} Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.331906 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" event={"ID":"da75d66a-010d-483d-b623-70707cc9af95","Type":"ContainerDied","Data":"eb244a2764d6e21aad2ae2fd0b64882429866702b472238dd426ec312b9f8fcf"} Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.331938 4768 scope.go:117] "RemoveContainer" containerID="0703ce620a53752d9ab07623a3d432daf6170a075729f7ed2040c1d914fe4d4c" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.332094 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-fhwr9" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.338945 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qhvz6" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.338951 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qhvz6" event={"ID":"bfd9e2e0-6b33-444a-a253-1d4e75a13681","Type":"ContainerDied","Data":"b02dc1a2f6ca630e07e86e5f21c699f2cfd95b920523aa3f5dcd4b84184585a7"} Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.338989 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b02dc1a2f6ca630e07e86e5f21c699f2cfd95b920523aa3f5dcd4b84184585a7" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.347030 4768 generic.go:334] "Generic (PLEG): container finished" podID="fbb41e74-ba79-42c6-ae70-8d86c8c26eff" containerID="d9822d343d0da2f2d89fa6b248509c39be1e2a76c89f5a24fe59fb61d79c1a71" exitCode=0 Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.347058 4768 generic.go:334] "Generic (PLEG): container finished" podID="fbb41e74-ba79-42c6-ae70-8d86c8c26eff" containerID="2efe5996240285397797305ade59091af0c1fbb1e743a6810fc8f0abf88d6888" exitCode=143 Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.347113 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbb41e74-ba79-42c6-ae70-8d86c8c26eff","Type":"ContainerDied","Data":"d9822d343d0da2f2d89fa6b248509c39be1e2a76c89f5a24fe59fb61d79c1a71"} Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.347155 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbb41e74-ba79-42c6-ae70-8d86c8c26eff","Type":"ContainerDied","Data":"2efe5996240285397797305ade59091af0c1fbb1e743a6810fc8f0abf88d6888"} Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.393050 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-fhwr9"] Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.393909 4768 scope.go:117] "RemoveContainer" containerID="9fbf61905924dd8bbb117d59e6882d8b6624fedca2da0cf2f9f0bda16603451c" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.402632 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-fhwr9"] Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.412206 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 17:10:30 crc kubenswrapper[4768]: E1124 17:10:30.413636 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfd9e2e0-6b33-444a-a253-1d4e75a13681" containerName="nova-cell1-conductor-db-sync" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.413678 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfd9e2e0-6b33-444a-a253-1d4e75a13681" containerName="nova-cell1-conductor-db-sync" Nov 24 17:10:30 crc kubenswrapper[4768]: E1124 17:10:30.413706 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da75d66a-010d-483d-b623-70707cc9af95" containerName="dnsmasq-dns" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.413713 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="da75d66a-010d-483d-b623-70707cc9af95" containerName="dnsmasq-dns" Nov 24 17:10:30 crc kubenswrapper[4768]: E1124 17:10:30.413732 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da75d66a-010d-483d-b623-70707cc9af95" containerName="init" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.413739 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="da75d66a-010d-483d-b623-70707cc9af95" 
containerName="init" Nov 24 17:10:30 crc kubenswrapper[4768]: E1124 17:10:30.413762 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f88274f-db1f-4ab0-88bf-12a230c0c5e6" containerName="nova-manage" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.413768 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f88274f-db1f-4ab0-88bf-12a230c0c5e6" containerName="nova-manage" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.413932 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f88274f-db1f-4ab0-88bf-12a230c0c5e6" containerName="nova-manage" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.413942 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfd9e2e0-6b33-444a-a253-1d4e75a13681" containerName="nova-cell1-conductor-db-sync" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.413960 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="da75d66a-010d-483d-b623-70707cc9af95" containerName="dnsmasq-dns" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.414589 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.417964 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.428878 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.522267 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2ce7d34-ee24-4fc3-8cad-1ca78c6e1d51-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f2ce7d34-ee24-4fc3-8cad-1ca78c6e1d51\") " pod="openstack/nova-cell1-conductor-0" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.522310 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2ce7d34-ee24-4fc3-8cad-1ca78c6e1d51-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f2ce7d34-ee24-4fc3-8cad-1ca78c6e1d51\") " pod="openstack/nova-cell1-conductor-0" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.522437 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqgpg\" (UniqueName: \"kubernetes.io/projected/f2ce7d34-ee24-4fc3-8cad-1ca78c6e1d51-kube-api-access-hqgpg\") pod \"nova-cell1-conductor-0\" (UID: \"f2ce7d34-ee24-4fc3-8cad-1ca78c6e1d51\") " pod="openstack/nova-cell1-conductor-0" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.625379 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2ce7d34-ee24-4fc3-8cad-1ca78c6e1d51-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f2ce7d34-ee24-4fc3-8cad-1ca78c6e1d51\") " pod="openstack/nova-cell1-conductor-0" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.625426 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2ce7d34-ee24-4fc3-8cad-1ca78c6e1d51-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f2ce7d34-ee24-4fc3-8cad-1ca78c6e1d51\") " pod="openstack/nova-cell1-conductor-0" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.625492 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqgpg\" (UniqueName: \"kubernetes.io/projected/f2ce7d34-ee24-4fc3-8cad-1ca78c6e1d51-kube-api-access-hqgpg\") pod \"nova-cell1-conductor-0\" (UID: \"f2ce7d34-ee24-4fc3-8cad-1ca78c6e1d51\") " pod="openstack/nova-cell1-conductor-0" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.630371 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2ce7d34-ee24-4fc3-8cad-1ca78c6e1d51-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f2ce7d34-ee24-4fc3-8cad-1ca78c6e1d51\") " pod="openstack/nova-cell1-conductor-0" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.644422 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqgpg\" (UniqueName: \"kubernetes.io/projected/f2ce7d34-ee24-4fc3-8cad-1ca78c6e1d51-kube-api-access-hqgpg\") pod \"nova-cell1-conductor-0\" (UID: \"f2ce7d34-ee24-4fc3-8cad-1ca78c6e1d51\") " pod="openstack/nova-cell1-conductor-0" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.649229 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2ce7d34-ee24-4fc3-8cad-1ca78c6e1d51-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f2ce7d34-ee24-4fc3-8cad-1ca78c6e1d51\") " pod="openstack/nova-cell1-conductor-0" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.683801 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.683850 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.731458 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.891552 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.931187 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-nova-metadata-tls-certs\") pod \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\" (UID: \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\") " Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.931439 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-config-data\") pod \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\" (UID: \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\") " Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.933757 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-logs\") pod \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\" (UID: \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\") " Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.934053 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-logs" (OuterVolumeSpecName: "logs") pod "fbb41e74-ba79-42c6-ae70-8d86c8c26eff" (UID: "fbb41e74-ba79-42c6-ae70-8d86c8c26eff"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.934152 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-combined-ca-bundle\") pod \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\" (UID: \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\") " Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.934182 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99b9p\" (UniqueName: \"kubernetes.io/projected/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-kube-api-access-99b9p\") pod \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\" (UID: \"fbb41e74-ba79-42c6-ae70-8d86c8c26eff\") " Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.935078 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-logs\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.939121 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-kube-api-access-99b9p" (OuterVolumeSpecName: "kube-api-access-99b9p") pod "fbb41e74-ba79-42c6-ae70-8d86c8c26eff" (UID: "fbb41e74-ba79-42c6-ae70-8d86c8c26eff"). InnerVolumeSpecName "kube-api-access-99b9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.969338 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbb41e74-ba79-42c6-ae70-8d86c8c26eff" (UID: "fbb41e74-ba79-42c6-ae70-8d86c8c26eff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.971491 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-config-data" (OuterVolumeSpecName: "config-data") pod "fbb41e74-ba79-42c6-ae70-8d86c8c26eff" (UID: "fbb41e74-ba79-42c6-ae70-8d86c8c26eff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:10:30 crc kubenswrapper[4768]: I1124 17:10:30.993673 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "fbb41e74-ba79-42c6-ae70-8d86c8c26eff" (UID: "fbb41e74-ba79-42c6-ae70-8d86c8c26eff"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.036497 4768 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.036787 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.036797 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.036805 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99b9p\" (UniqueName: \"kubernetes.io/projected/fbb41e74-ba79-42c6-ae70-8d86c8c26eff-kube-api-access-99b9p\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.255470 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 17:10:31 crc kubenswrapper[4768]: W1124 17:10:31.268369 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2ce7d34_ee24_4fc3_8cad_1ca78c6e1d51.slice/crio-763a09a46894dcd5eb90f1c25cb441d57ef13a9c3df27efa3dd4078dc6ab9ff1 WatchSource:0}: Error finding container 763a09a46894dcd5eb90f1c25cb441d57ef13a9c3df27efa3dd4078dc6ab9ff1: Status 404 returned error can't find the container with id 763a09a46894dcd5eb90f1c25cb441d57ef13a9c3df27efa3dd4078dc6ab9ff1 Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.364191 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbb41e74-ba79-42c6-ae70-8d86c8c26eff","Type":"ContainerDied","Data":"2c0393c60f6c6e345e507b06914748802f6beb93b304c73ac45ef137a34603c9"} Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.364236 4768 scope.go:117] "RemoveContainer" containerID="d9822d343d0da2f2d89fa6b248509c39be1e2a76c89f5a24fe59fb61d79c1a71" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.364240 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.393317 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f2ce7d34-ee24-4fc3-8cad-1ca78c6e1d51","Type":"ContainerStarted","Data":"763a09a46894dcd5eb90f1c25cb441d57ef13a9c3df27efa3dd4078dc6ab9ff1"} Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.393337 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f0d523be-42d4-491c-9e07-3b76db03250c" containerName="nova-scheduler-scheduler" containerID="cri-o://c11a488554c2134b02559a3926f2c3e3d7e3f717e0a92bbdca7cae7bfe3dbe73" gracePeriod=30 Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.425814 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.439324 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.451595 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 17:10:31 crc kubenswrapper[4768]: E1124 17:10:31.451997 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb41e74-ba79-42c6-ae70-8d86c8c26eff" containerName="nova-metadata-log" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.452189 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb41e74-ba79-42c6-ae70-8d86c8c26eff" containerName="nova-metadata-log" Nov 24 17:10:31 crc kubenswrapper[4768]: E1124 17:10:31.452213 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb41e74-ba79-42c6-ae70-8d86c8c26eff" containerName="nova-metadata-metadata" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.452230 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb41e74-ba79-42c6-ae70-8d86c8c26eff" containerName="nova-metadata-metadata" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.452430 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbb41e74-ba79-42c6-ae70-8d86c8c26eff" containerName="nova-metadata-metadata" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.452456 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbb41e74-ba79-42c6-ae70-8d86c8c26eff" containerName="nova-metadata-log" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.453411 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.456522 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.456528 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.461464 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.461624 4768 scope.go:117] "RemoveContainer" containerID="2efe5996240285397797305ade59091af0c1fbb1e743a6810fc8f0abf88d6888" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.550228 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\") " pod="openstack/nova-metadata-0" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.550571 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx7ch\" (UniqueName: \"kubernetes.io/projected/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-kube-api-access-hx7ch\") pod \"nova-metadata-0\" (UID: \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\") " pod="openstack/nova-metadata-0" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.550596 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\") " pod="openstack/nova-metadata-0" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.550624 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-config-data\") pod \"nova-metadata-0\" (UID: \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\") " pod="openstack/nova-metadata-0" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.550641 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-logs\") pod \"nova-metadata-0\" (UID: \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\") " pod="openstack/nova-metadata-0" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.590307 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da75d66a-010d-483d-b623-70707cc9af95" path="/var/lib/kubelet/pods/da75d66a-010d-483d-b623-70707cc9af95/volumes" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.590920 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbb41e74-ba79-42c6-ae70-8d86c8c26eff" path="/var/lib/kubelet/pods/fbb41e74-ba79-42c6-ae70-8d86c8c26eff/volumes" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.652321 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\") " pod="openstack/nova-metadata-0" Nov 24 17:10:31 crc 
kubenswrapper[4768]: I1124 17:10:31.653082 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx7ch\" (UniqueName: \"kubernetes.io/projected/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-kube-api-access-hx7ch\") pod \"nova-metadata-0\" (UID: \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\") " pod="openstack/nova-metadata-0" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.653126 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\") " pod="openstack/nova-metadata-0" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.653156 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-config-data\") pod \"nova-metadata-0\" (UID: \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\") " pod="openstack/nova-metadata-0" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.653178 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-logs\") pod \"nova-metadata-0\" (UID: \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\") " pod="openstack/nova-metadata-0" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.654089 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-logs\") pod \"nova-metadata-0\" (UID: \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\") " pod="openstack/nova-metadata-0" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.657665 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-config-data\") pod \"nova-metadata-0\" (UID: \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\") " pod="openstack/nova-metadata-0" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.657821 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\") " pod="openstack/nova-metadata-0" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.658156 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\") " pod="openstack/nova-metadata-0" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.672562 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx7ch\" (UniqueName: \"kubernetes.io/projected/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-kube-api-access-hx7ch\") pod \"nova-metadata-0\" (UID: \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\") " pod="openstack/nova-metadata-0" Nov 24 17:10:31 crc kubenswrapper[4768]: I1124 17:10:31.776050 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 17:10:32 crc kubenswrapper[4768]: I1124 17:10:32.392134 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 17:10:32 crc kubenswrapper[4768]: I1124 17:10:32.411411 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f2ce7d34-ee24-4fc3-8cad-1ca78c6e1d51","Type":"ContainerStarted","Data":"a3cd0fa361ca0dbf065b45dbced8c16f5b89dabaea8b3a5a96d538dc88f873bc"} Nov 24 17:10:32 crc kubenswrapper[4768]: I1124 17:10:32.411574 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 24 17:10:32 crc kubenswrapper[4768]: I1124 17:10:32.438858 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.438843431 podStartE2EDuration="2.438843431s" podCreationTimestamp="2025-11-24 17:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:10:32.43562866 +0000 UTC m=+1113.682597328" watchObservedRunningTime="2025-11-24 17:10:32.438843431 +0000 UTC m=+1113.685812089" Nov 24 17:10:33 crc kubenswrapper[4768]: I1124 17:10:33.429843 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593","Type":"ContainerStarted","Data":"4045c7948a0c343083b32bb762b2eda7ed60eb1593ed25b55276e6ba78331889"} Nov 24 17:10:33 crc kubenswrapper[4768]: I1124 17:10:33.430386 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593","Type":"ContainerStarted","Data":"55f98f46be87a3272f2e04d9e40a1f9281d313951e873ace80ce8383fdb46a6b"} Nov 24 17:10:33 crc kubenswrapper[4768]: I1124 17:10:33.430398 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593","Type":"ContainerStarted","Data":"48616866c680e4a3304e67536d26bd39b64fd4a19b89af18accb8c0b7fcb7638"} Nov 24 17:10:33 crc kubenswrapper[4768]: I1124 17:10:33.453767 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.453745278 podStartE2EDuration="2.453745278s" podCreationTimestamp="2025-11-24 17:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:10:33.447858412 +0000 UTC m=+1114.694827070" watchObservedRunningTime="2025-11-24 17:10:33.453745278 +0000 UTC m=+1114.700713936" Nov 24 17:10:33 crc kubenswrapper[4768]: E1124 17:10:33.571265 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c11a488554c2134b02559a3926f2c3e3d7e3f717e0a92bbdca7cae7bfe3dbe73" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 17:10:33 crc kubenswrapper[4768]: E1124 17:10:33.573083 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c11a488554c2134b02559a3926f2c3e3d7e3f717e0a92bbdca7cae7bfe3dbe73" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 17:10:33 crc kubenswrapper[4768]: E1124 
17:10:33.574516 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c11a488554c2134b02559a3926f2c3e3d7e3f717e0a92bbdca7cae7bfe3dbe73" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 17:10:33 crc kubenswrapper[4768]: E1124 17:10:33.574544 4768 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f0d523be-42d4-491c-9e07-3b76db03250c" containerName="nova-scheduler-scheduler" Nov 24 17:10:34 crc kubenswrapper[4768]: I1124 17:10:34.439336 4768 generic.go:334] "Generic (PLEG): container finished" podID="f0d523be-42d4-491c-9e07-3b76db03250c" containerID="c11a488554c2134b02559a3926f2c3e3d7e3f717e0a92bbdca7cae7bfe3dbe73" exitCode=0 Nov 24 17:10:34 crc kubenswrapper[4768]: I1124 17:10:34.439378 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f0d523be-42d4-491c-9e07-3b76db03250c","Type":"ContainerDied","Data":"c11a488554c2134b02559a3926f2c3e3d7e3f717e0a92bbdca7cae7bfe3dbe73"} Nov 24 17:10:34 crc kubenswrapper[4768]: I1124 17:10:34.691082 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 17:10:34 crc kubenswrapper[4768]: I1124 17:10:34.818679 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d523be-42d4-491c-9e07-3b76db03250c-combined-ca-bundle\") pod \"f0d523be-42d4-491c-9e07-3b76db03250c\" (UID: \"f0d523be-42d4-491c-9e07-3b76db03250c\") " Nov 24 17:10:34 crc kubenswrapper[4768]: I1124 17:10:34.818830 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d523be-42d4-491c-9e07-3b76db03250c-config-data\") pod \"f0d523be-42d4-491c-9e07-3b76db03250c\" (UID: \"f0d523be-42d4-491c-9e07-3b76db03250c\") " Nov 24 17:10:34 crc kubenswrapper[4768]: I1124 17:10:34.819035 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f44hl\" (UniqueName: \"kubernetes.io/projected/f0d523be-42d4-491c-9e07-3b76db03250c-kube-api-access-f44hl\") pod \"f0d523be-42d4-491c-9e07-3b76db03250c\" (UID: \"f0d523be-42d4-491c-9e07-3b76db03250c\") " Nov 24 17:10:34 crc kubenswrapper[4768]: I1124 17:10:34.825644 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0d523be-42d4-491c-9e07-3b76db03250c-kube-api-access-f44hl" (OuterVolumeSpecName: "kube-api-access-f44hl") pod "f0d523be-42d4-491c-9e07-3b76db03250c" (UID: "f0d523be-42d4-491c-9e07-3b76db03250c"). InnerVolumeSpecName "kube-api-access-f44hl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:10:34 crc kubenswrapper[4768]: I1124 17:10:34.846235 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0d523be-42d4-491c-9e07-3b76db03250c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0d523be-42d4-491c-9e07-3b76db03250c" (UID: "f0d523be-42d4-491c-9e07-3b76db03250c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:10:34 crc kubenswrapper[4768]: I1124 17:10:34.855035 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0d523be-42d4-491c-9e07-3b76db03250c-config-data" (OuterVolumeSpecName: "config-data") pod "f0d523be-42d4-491c-9e07-3b76db03250c" (UID: "f0d523be-42d4-491c-9e07-3b76db03250c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:10:34 crc kubenswrapper[4768]: I1124 17:10:34.921439 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0d523be-42d4-491c-9e07-3b76db03250c-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:34 crc kubenswrapper[4768]: I1124 17:10:34.921476 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f44hl\" (UniqueName: \"kubernetes.io/projected/f0d523be-42d4-491c-9e07-3b76db03250c-kube-api-access-f44hl\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:34 crc kubenswrapper[4768]: I1124 17:10:34.921491 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d523be-42d4-491c-9e07-3b76db03250c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:35 crc kubenswrapper[4768]: I1124 17:10:35.452731 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f0d523be-42d4-491c-9e07-3b76db03250c","Type":"ContainerDied","Data":"f6cfd4ecc628cef5ded45d4bc480fd2211cf693a651f8f8b0432347778a52c16"} Nov 24 17:10:35 crc kubenswrapper[4768]: I1124 17:10:35.453238 4768 scope.go:117] "RemoveContainer" containerID="c11a488554c2134b02559a3926f2c3e3d7e3f717e0a92bbdca7cae7bfe3dbe73" Nov 24 17:10:35 crc kubenswrapper[4768]: I1124 17:10:35.452775 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 17:10:35 crc kubenswrapper[4768]: I1124 17:10:35.502025 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 17:10:35 crc kubenswrapper[4768]: I1124 17:10:35.515419 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 17:10:35 crc kubenswrapper[4768]: I1124 17:10:35.525168 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 17:10:35 crc kubenswrapper[4768]: E1124 17:10:35.525631 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0d523be-42d4-491c-9e07-3b76db03250c" containerName="nova-scheduler-scheduler" Nov 24 17:10:35 crc kubenswrapper[4768]: I1124 17:10:35.525654 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0d523be-42d4-491c-9e07-3b76db03250c" containerName="nova-scheduler-scheduler" Nov 24 17:10:35 crc kubenswrapper[4768]: I1124 17:10:35.525836 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0d523be-42d4-491c-9e07-3b76db03250c" containerName="nova-scheduler-scheduler" Nov 24 17:10:35 crc kubenswrapper[4768]: I1124 17:10:35.526592 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 17:10:35 crc kubenswrapper[4768]: I1124 17:10:35.528754 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 17:10:35 crc kubenswrapper[4768]: I1124 17:10:35.535588 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 17:10:35 crc kubenswrapper[4768]: I1124 17:10:35.619782 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0d523be-42d4-491c-9e07-3b76db03250c" path="/var/lib/kubelet/pods/f0d523be-42d4-491c-9e07-3b76db03250c/volumes" Nov 24 17:10:35 crc kubenswrapper[4768]: I1124 17:10:35.633036 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a540e3c-2ee6-4c19-955a-40e614b40dce-config-data\") pod \"nova-scheduler-0\" (UID: \"3a540e3c-2ee6-4c19-955a-40e614b40dce\") " pod="openstack/nova-scheduler-0" Nov 24 17:10:35 crc kubenswrapper[4768]: I1124 17:10:35.635189 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px9fh\" (UniqueName: \"kubernetes.io/projected/3a540e3c-2ee6-4c19-955a-40e614b40dce-kube-api-access-px9fh\") pod \"nova-scheduler-0\" (UID: \"3a540e3c-2ee6-4c19-955a-40e614b40dce\") " pod="openstack/nova-scheduler-0" Nov 24 17:10:35 crc kubenswrapper[4768]: I1124 17:10:35.635289 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a540e3c-2ee6-4c19-955a-40e614b40dce-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3a540e3c-2ee6-4c19-955a-40e614b40dce\") " pod="openstack/nova-scheduler-0" Nov 24 17:10:35 crc kubenswrapper[4768]: I1124 17:10:35.736936 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px9fh\" (UniqueName: \"kubernetes.io/projected/3a540e3c-2ee6-4c19-955a-40e614b40dce-kube-api-access-px9fh\") pod \"nova-scheduler-0\" (UID: \"3a540e3c-2ee6-4c19-955a-40e614b40dce\") " pod="openstack/nova-scheduler-0" Nov 24 17:10:35 crc kubenswrapper[4768]: I1124 17:10:35.736990 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a540e3c-2ee6-4c19-955a-40e614b40dce-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3a540e3c-2ee6-4c19-955a-40e614b40dce\") " pod="openstack/nova-scheduler-0" Nov 24 17:10:35 crc kubenswrapper[4768]: I1124 17:10:35.737067 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a540e3c-2ee6-4c19-955a-40e614b40dce-config-data\") pod \"nova-scheduler-0\" (UID: \"3a540e3c-2ee6-4c19-955a-40e614b40dce\") " pod="openstack/nova-scheduler-0" Nov 24 17:10:35 crc kubenswrapper[4768]: I1124 17:10:35.743301 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a540e3c-2ee6-4c19-955a-40e614b40dce-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3a540e3c-2ee6-4c19-955a-40e614b40dce\") " pod="openstack/nova-scheduler-0" Nov 24 17:10:35 crc kubenswrapper[4768]: I1124 17:10:35.743855 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a540e3c-2ee6-4c19-955a-40e614b40dce-config-data\") pod \"nova-scheduler-0\" (UID: 
\"3a540e3c-2ee6-4c19-955a-40e614b40dce\") " pod="openstack/nova-scheduler-0" Nov 24 17:10:35 crc kubenswrapper[4768]: I1124 17:10:35.759994 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px9fh\" (UniqueName: \"kubernetes.io/projected/3a540e3c-2ee6-4c19-955a-40e614b40dce-kube-api-access-px9fh\") pod \"nova-scheduler-0\" (UID: \"3a540e3c-2ee6-4c19-955a-40e614b40dce\") " pod="openstack/nova-scheduler-0" Nov 24 17:10:35 crc kubenswrapper[4768]: I1124 17:10:35.886777 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 17:10:36 crc kubenswrapper[4768]: I1124 17:10:36.314396 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 17:10:36 crc kubenswrapper[4768]: W1124 17:10:36.314829 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a540e3c_2ee6_4c19_955a_40e614b40dce.slice/crio-1900d90a7359c9b3bb354a7128132df3f90dffa3012b3d6cc4ef2f3010d706ac WatchSource:0}: Error finding container 1900d90a7359c9b3bb354a7128132df3f90dffa3012b3d6cc4ef2f3010d706ac: Status 404 returned error can't find the container with id 1900d90a7359c9b3bb354a7128132df3f90dffa3012b3d6cc4ef2f3010d706ac Nov 24 17:10:36 crc kubenswrapper[4768]: I1124 17:10:36.464037 4768 generic.go:334] "Generic (PLEG): container finished" podID="70371e42-7f78-41cf-a2f7-c1322e103ca3" containerID="4998736fdf8f793a1a6c23e29d4b8530bde6b8bfd091b51f90f9b198156885ac" exitCode=0 Nov 24 17:10:36 crc kubenswrapper[4768]: I1124 17:10:36.464199 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"70371e42-7f78-41cf-a2f7-c1322e103ca3","Type":"ContainerDied","Data":"4998736fdf8f793a1a6c23e29d4b8530bde6b8bfd091b51f90f9b198156885ac"} Nov 24 17:10:36 crc kubenswrapper[4768]: I1124 17:10:36.465704 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3a540e3c-2ee6-4c19-955a-40e614b40dce","Type":"ContainerStarted","Data":"1900d90a7359c9b3bb354a7128132df3f90dffa3012b3d6cc4ef2f3010d706ac"} Nov 24 17:10:36 crc kubenswrapper[4768]: I1124 17:10:36.534333 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 17:10:36 crc kubenswrapper[4768]: I1124 17:10:36.660631 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70371e42-7f78-41cf-a2f7-c1322e103ca3-config-data\") pod \"70371e42-7f78-41cf-a2f7-c1322e103ca3\" (UID: \"70371e42-7f78-41cf-a2f7-c1322e103ca3\") " Nov 24 17:10:36 crc kubenswrapper[4768]: I1124 17:10:36.660699 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70371e42-7f78-41cf-a2f7-c1322e103ca3-logs\") pod \"70371e42-7f78-41cf-a2f7-c1322e103ca3\" (UID: \"70371e42-7f78-41cf-a2f7-c1322e103ca3\") " Nov 24 17:10:36 crc kubenswrapper[4768]: I1124 17:10:36.660858 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70371e42-7f78-41cf-a2f7-c1322e103ca3-combined-ca-bundle\") pod \"70371e42-7f78-41cf-a2f7-c1322e103ca3\" (UID: \"70371e42-7f78-41cf-a2f7-c1322e103ca3\") " Nov 24 17:10:36 crc kubenswrapper[4768]: I1124 17:10:36.661041 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbdg7\" (UniqueName: \"kubernetes.io/projected/70371e42-7f78-41cf-a2f7-c1322e103ca3-kube-api-access-wbdg7\") pod \"70371e42-7f78-41cf-a2f7-c1322e103ca3\" (UID: \"70371e42-7f78-41cf-a2f7-c1322e103ca3\") " Nov 24 17:10:36 crc kubenswrapper[4768]: I1124 17:10:36.661572 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70371e42-7f78-41cf-a2f7-c1322e103ca3-logs" (OuterVolumeSpecName: "logs") pod "70371e42-7f78-41cf-a2f7-c1322e103ca3" (UID: "70371e42-7f78-41cf-a2f7-c1322e103ca3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:10:36 crc kubenswrapper[4768]: I1124 17:10:36.665571 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70371e42-7f78-41cf-a2f7-c1322e103ca3-kube-api-access-wbdg7" (OuterVolumeSpecName: "kube-api-access-wbdg7") pod "70371e42-7f78-41cf-a2f7-c1322e103ca3" (UID: "70371e42-7f78-41cf-a2f7-c1322e103ca3"). InnerVolumeSpecName "kube-api-access-wbdg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:10:36 crc kubenswrapper[4768]: I1124 17:10:36.686715 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70371e42-7f78-41cf-a2f7-c1322e103ca3-config-data" (OuterVolumeSpecName: "config-data") pod "70371e42-7f78-41cf-a2f7-c1322e103ca3" (UID: "70371e42-7f78-41cf-a2f7-c1322e103ca3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:10:36 crc kubenswrapper[4768]: I1124 17:10:36.688663 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70371e42-7f78-41cf-a2f7-c1322e103ca3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70371e42-7f78-41cf-a2f7-c1322e103ca3" (UID: "70371e42-7f78-41cf-a2f7-c1322e103ca3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:10:36 crc kubenswrapper[4768]: I1124 17:10:36.763504 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbdg7\" (UniqueName: \"kubernetes.io/projected/70371e42-7f78-41cf-a2f7-c1322e103ca3-kube-api-access-wbdg7\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:36 crc kubenswrapper[4768]: I1124 17:10:36.763789 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70371e42-7f78-41cf-a2f7-c1322e103ca3-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:36 crc kubenswrapper[4768]: I1124 17:10:36.763798 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70371e42-7f78-41cf-a2f7-c1322e103ca3-logs\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:36 crc kubenswrapper[4768]: I1124 17:10:36.763807 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70371e42-7f78-41cf-a2f7-c1322e103ca3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:36 crc kubenswrapper[4768]: I1124 17:10:36.776662 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 17:10:36 crc kubenswrapper[4768]: I1124 17:10:36.776734 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.481037 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3a540e3c-2ee6-4c19-955a-40e614b40dce","Type":"ContainerStarted","Data":"c191b19113ee6a891f24169d0e41850e85033c025e145757131b78121c85182a"} Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.486847 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"70371e42-7f78-41cf-a2f7-c1322e103ca3","Type":"ContainerDied","Data":"b98174682361bc0698c6125af0ce7cfe82d4a3ba1e88caede81d15f08bfb496b"} Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.486914 4768 scope.go:117] "RemoveContainer" containerID="4998736fdf8f793a1a6c23e29d4b8530bde6b8bfd091b51f90f9b198156885ac" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.487126 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.515689 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.5156656330000002 podStartE2EDuration="2.515665633s" podCreationTimestamp="2025-11-24 17:10:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:10:37.506890855 +0000 UTC m=+1118.753859553" watchObservedRunningTime="2025-11-24 17:10:37.515665633 +0000 UTC m=+1118.762634301" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.526871 4768 scope.go:117] "RemoveContainer" containerID="61bb31569e5e1aa10b53da4474c773b324742025d087b7beb5e91aba6a7f89d7" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.548908 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.563442 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.571995 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 17:10:37 crc kubenswrapper[4768]: E1124 17:10:37.572592 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70371e42-7f78-41cf-a2f7-c1322e103ca3" containerName="nova-api-log" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.572614 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="70371e42-7f78-41cf-a2f7-c1322e103ca3" containerName="nova-api-log" Nov 24 17:10:37 crc kubenswrapper[4768]: E1124 17:10:37.572634 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70371e42-7f78-41cf-a2f7-c1322e103ca3" containerName="nova-api-api" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.572644 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="70371e42-7f78-41cf-a2f7-c1322e103ca3" containerName="nova-api-api" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.572913 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="70371e42-7f78-41cf-a2f7-c1322e103ca3" containerName="nova-api-api" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.572934 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="70371e42-7f78-41cf-a2f7-c1322e103ca3" containerName="nova-api-log" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.574088 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.587254 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.622785 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70371e42-7f78-41cf-a2f7-c1322e103ca3" path="/var/lib/kubelet/pods/70371e42-7f78-41cf-a2f7-c1322e103ca3/volumes" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.623650 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.696058 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrvg5\" (UniqueName: \"kubernetes.io/projected/d16daa8a-8657-4552-802a-ca7e557f1d4f-kube-api-access-jrvg5\") pod \"nova-api-0\" (UID: \"d16daa8a-8657-4552-802a-ca7e557f1d4f\") " pod="openstack/nova-api-0" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.696134 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d16daa8a-8657-4552-802a-ca7e557f1d4f-config-data\") pod \"nova-api-0\" (UID: \"d16daa8a-8657-4552-802a-ca7e557f1d4f\") " pod="openstack/nova-api-0" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.696245 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d16daa8a-8657-4552-802a-ca7e557f1d4f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d16daa8a-8657-4552-802a-ca7e557f1d4f\") " pod="openstack/nova-api-0" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.696334 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d16daa8a-8657-4552-802a-ca7e557f1d4f-logs\") pod \"nova-api-0\" (UID: \"d16daa8a-8657-4552-802a-ca7e557f1d4f\") " pod="openstack/nova-api-0" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.798003 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d16daa8a-8657-4552-802a-ca7e557f1d4f-config-data\") pod \"nova-api-0\" (UID: \"d16daa8a-8657-4552-802a-ca7e557f1d4f\") " pod="openstack/nova-api-0" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.798173 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d16daa8a-8657-4552-802a-ca7e557f1d4f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d16daa8a-8657-4552-802a-ca7e557f1d4f\") " pod="openstack/nova-api-0" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.798273 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d16daa8a-8657-4552-802a-ca7e557f1d4f-logs\") pod \"nova-api-0\" (UID: \"d16daa8a-8657-4552-802a-ca7e557f1d4f\") " pod="openstack/nova-api-0" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.798327 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrvg5\" (UniqueName: \"kubernetes.io/projected/d16daa8a-8657-4552-802a-ca7e557f1d4f-kube-api-access-jrvg5\") pod \"nova-api-0\" (UID: \"d16daa8a-8657-4552-802a-ca7e557f1d4f\") " pod="openstack/nova-api-0" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 
17:10:37.799429 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d16daa8a-8657-4552-802a-ca7e557f1d4f-logs\") pod \"nova-api-0\" (UID: \"d16daa8a-8657-4552-802a-ca7e557f1d4f\") " pod="openstack/nova-api-0" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.804044 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d16daa8a-8657-4552-802a-ca7e557f1d4f-config-data\") pod \"nova-api-0\" (UID: \"d16daa8a-8657-4552-802a-ca7e557f1d4f\") " pod="openstack/nova-api-0" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.812017 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d16daa8a-8657-4552-802a-ca7e557f1d4f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d16daa8a-8657-4552-802a-ca7e557f1d4f\") " pod="openstack/nova-api-0" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.815710 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrvg5\" (UniqueName: \"kubernetes.io/projected/d16daa8a-8657-4552-802a-ca7e557f1d4f-kube-api-access-jrvg5\") pod \"nova-api-0\" (UID: \"d16daa8a-8657-4552-802a-ca7e557f1d4f\") " pod="openstack/nova-api-0" Nov 24 17:10:37 crc kubenswrapper[4768]: I1124 17:10:37.911186 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 17:10:38 crc kubenswrapper[4768]: W1124 17:10:38.362315 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd16daa8a_8657_4552_802a_ca7e557f1d4f.slice/crio-5a1093d24bdc757e1865d097ca5bf6b58da7c32fac275dba042cdae5a450bcee WatchSource:0}: Error finding container 5a1093d24bdc757e1865d097ca5bf6b58da7c32fac275dba042cdae5a450bcee: Status 404 returned error can't find the container with id 5a1093d24bdc757e1865d097ca5bf6b58da7c32fac275dba042cdae5a450bcee Nov 24 17:10:38 crc kubenswrapper[4768]: I1124 17:10:38.363863 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 17:10:38 crc kubenswrapper[4768]: I1124 17:10:38.499116 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d16daa8a-8657-4552-802a-ca7e557f1d4f","Type":"ContainerStarted","Data":"5a1093d24bdc757e1865d097ca5bf6b58da7c32fac275dba042cdae5a450bcee"} Nov 24 17:10:39 crc kubenswrapper[4768]: I1124 17:10:39.508394 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d16daa8a-8657-4552-802a-ca7e557f1d4f","Type":"ContainerStarted","Data":"299690f5d5b61803d59d2d2d1532971b6d040765cf98b5df9d0e6061ae6873c0"} Nov 24 17:10:39 crc kubenswrapper[4768]: I1124 17:10:39.508791 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d16daa8a-8657-4552-802a-ca7e557f1d4f","Type":"ContainerStarted","Data":"12b49de96e6553235b78ec119176ec7243a14be0d6cb9f5ebc94bea1ff428c2e"} Nov 24 17:10:39 crc kubenswrapper[4768]: I1124 17:10:39.539455 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.539437617 podStartE2EDuration="2.539437617s" podCreationTimestamp="2025-11-24 17:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:10:39.523640031 +0000 UTC m=+1120.770608719" 
Nov 24 17:10:40 crc kubenswrapper[4768]: I1124 17:10:40.767214 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Nov 24 17:10:40 crc kubenswrapper[4768]: I1124 17:10:40.887863 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Nov 24 17:10:41 crc kubenswrapper[4768]: I1124 17:10:41.776630 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 24 17:10:41 crc kubenswrapper[4768]: I1124 17:10:41.776761 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 24 17:10:42 crc kubenswrapper[4768]: I1124 17:10:42.792616 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0eecf1aa-66b5-4d92-b0f0-08a04c9ed593" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 24 17:10:42 crc kubenswrapper[4768]: I1124 17:10:42.792600 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0eecf1aa-66b5-4d92-b0f0-08a04c9ed593" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 24 17:10:45 crc kubenswrapper[4768]: I1124 17:10:45.887104 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Nov 24 17:10:45 crc kubenswrapper[4768]: I1124 17:10:45.920554 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Nov 24 17:10:46 crc kubenswrapper[4768]: I1124 17:10:46.639843 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Nov 24 17:10:47 crc kubenswrapper[4768]: I1124 17:10:47.912501 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 24 17:10:47 crc kubenswrapper[4768]: I1124 17:10:47.912810 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 24 17:10:48 crc kubenswrapper[4768]: I1124 17:10:48.994786 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d16daa8a-8657-4552-802a-ca7e557f1d4f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 24 17:10:48 crc kubenswrapper[4768]: I1124 17:10:48.994794 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d16daa8a-8657-4552-802a-ca7e557f1d4f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 24 17:10:50 crc kubenswrapper[4768]: I1124 17:10:50.539625 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Nov 24 17:10:50 crc kubenswrapper[4768]: I1124 17:10:50.663269 4768 generic.go:334] "Generic (PLEG): container finished" podID="fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd" containerID="47d90a3736efa4b09c295da65b5e9ca41c0c7bf43f930e89b8fb8fbd04eb9489" exitCode=0
Nov 24 17:10:50 crc kubenswrapper[4768]: I1124 17:10:50.663331 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd","Type":"ContainerDied","Data":"47d90a3736efa4b09c295da65b5e9ca41c0c7bf43f930e89b8fb8fbd04eb9489"}
Nov 24 17:10:51 crc kubenswrapper[4768]: I1124 17:10:51.674500 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd","Type":"ContainerStarted","Data":"88ded570a09989747efa72dc355c7d22a9dd93760fc677067a5cafcc0e227776"}
Nov 24 17:10:51 crc kubenswrapper[4768]: I1124 17:10:51.674946 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd","Type":"ContainerStarted","Data":"7aa687e89a826381599c0304b1f61716ef3fe090bc883f1034478edda96e0349"}
Nov 24 17:10:51 crc kubenswrapper[4768]: I1124 17:10:51.783263 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 24 17:10:51 crc kubenswrapper[4768]: I1124 17:10:51.783984 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 24 17:10:51 crc kubenswrapper[4768]: I1124 17:10:51.792264 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 24 17:10:52 crc kubenswrapper[4768]: I1124 17:10:52.688003 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd","Type":"ContainerStarted","Data":"c41f49c18dc53840857f4262a556163d97778245a9b87681988f65d75564bbde"}
Nov 24 17:10:52 crc kubenswrapper[4768]: I1124 17:10:52.688514 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0"
Nov 24 17:10:52 crc kubenswrapper[4768]: I1124 17:10:52.688708 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0"
Nov 24 17:10:52 crc kubenswrapper[4768]: I1124 17:10:52.703692 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 24 17:10:52 crc kubenswrapper[4768]: I1124 17:10:52.722958 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=75.088883029 podStartE2EDuration="2m2.722943125s" podCreationTimestamp="2025-11-24 17:08:50 +0000 UTC" firstStartedPulling="2025-11-24 17:08:54.177375627 +0000 UTC m=+1015.424344275" lastFinishedPulling="2025-11-24 17:09:41.811435713 +0000 UTC m=+1063.058404371" observedRunningTime="2025-11-24 17:10:52.718790348 +0000 UTC m=+1133.965759006" watchObservedRunningTime="2025-11-24 17:10:52.722943125 +0000 UTC m=+1133.969911783"
Nov 24 17:10:53 crc kubenswrapper[4768]: I1124 17:10:53.656043 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 24 17:10:53 crc kubenswrapper[4768]: I1124 17:10:53.709643 4768 generic.go:334] "Generic (PLEG): container finished" podID="319b2ddf-3b71-41c9-8fd8-7830b21ba3ec" containerID="75ca58064731d97dfe67852f03b501219fe9adc5925e18e584331e214e1e04cc" exitCode=137
Nov 24 17:10:53 crc kubenswrapper[4768]: I1124 17:10:53.709696 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
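Between 17:10:40 and 17:10:52 the probe records show the usual bring-up sequence: startup probes for nova-metadata-0, nova-scheduler-0 and nova-api-0 first report unhealthy with a client timeout, then flip to started, after which readiness goes ready. The "(Client.Timeout exceeded while awaiting headers)" text in the failure output is what Go's net/http produces for a GET whose client-side deadline expires before response headers arrive; a sketch reproducing it (the URL comes from the records above, and the 1-second timeout matches kubelet's default probe timeoutSeconds):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// One probe attempt: GET with a 1s overall deadline, as the kubelet does by default.
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://10.217.0.201:8774/")
	if err != nil {
		// A slow server yields "... (Client.Timeout exceeded while awaiting headers)".
		fmt.Println("probe failed:", err)
		return
	}
	resp.Body.Close()
	fmt.Println("probe ok:", resp.Status)
}
```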
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 17:10:53 crc kubenswrapper[4768]: I1124 17:10:53.709729 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"319b2ddf-3b71-41c9-8fd8-7830b21ba3ec","Type":"ContainerDied","Data":"75ca58064731d97dfe67852f03b501219fe9adc5925e18e584331e214e1e04cc"} Nov 24 17:10:53 crc kubenswrapper[4768]: I1124 17:10:53.709812 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"319b2ddf-3b71-41c9-8fd8-7830b21ba3ec","Type":"ContainerDied","Data":"c0b81a32c8f5bd1645f5c66b1a0825ba7e8dc541984be6e191b65896d4789049"} Nov 24 17:10:53 crc kubenswrapper[4768]: I1124 17:10:53.709850 4768 scope.go:117] "RemoveContainer" containerID="75ca58064731d97dfe67852f03b501219fe9adc5925e18e584331e214e1e04cc" Nov 24 17:10:53 crc kubenswrapper[4768]: I1124 17:10:53.718190 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fn4k\" (UniqueName: \"kubernetes.io/projected/319b2ddf-3b71-41c9-8fd8-7830b21ba3ec-kube-api-access-5fn4k\") pod \"319b2ddf-3b71-41c9-8fd8-7830b21ba3ec\" (UID: \"319b2ddf-3b71-41c9-8fd8-7830b21ba3ec\") " Nov 24 17:10:53 crc kubenswrapper[4768]: I1124 17:10:53.718306 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/319b2ddf-3b71-41c9-8fd8-7830b21ba3ec-config-data\") pod \"319b2ddf-3b71-41c9-8fd8-7830b21ba3ec\" (UID: \"319b2ddf-3b71-41c9-8fd8-7830b21ba3ec\") " Nov 24 17:10:53 crc kubenswrapper[4768]: I1124 17:10:53.720169 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319b2ddf-3b71-41c9-8fd8-7830b21ba3ec-combined-ca-bundle\") pod \"319b2ddf-3b71-41c9-8fd8-7830b21ba3ec\" (UID: \"319b2ddf-3b71-41c9-8fd8-7830b21ba3ec\") " Nov 24 17:10:53 crc kubenswrapper[4768]: I1124 17:10:53.753073 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/319b2ddf-3b71-41c9-8fd8-7830b21ba3ec-kube-api-access-5fn4k" (OuterVolumeSpecName: "kube-api-access-5fn4k") pod "319b2ddf-3b71-41c9-8fd8-7830b21ba3ec" (UID: "319b2ddf-3b71-41c9-8fd8-7830b21ba3ec"). InnerVolumeSpecName "kube-api-access-5fn4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:10:53 crc kubenswrapper[4768]: I1124 17:10:53.812405 4768 scope.go:117] "RemoveContainer" containerID="75ca58064731d97dfe67852f03b501219fe9adc5925e18e584331e214e1e04cc" Nov 24 17:10:53 crc kubenswrapper[4768]: I1124 17:10:53.813980 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/319b2ddf-3b71-41c9-8fd8-7830b21ba3ec-config-data" (OuterVolumeSpecName: "config-data") pod "319b2ddf-3b71-41c9-8fd8-7830b21ba3ec" (UID: "319b2ddf-3b71-41c9-8fd8-7830b21ba3ec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:10:53 crc kubenswrapper[4768]: I1124 17:10:53.824696 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fn4k\" (UniqueName: \"kubernetes.io/projected/319b2ddf-3b71-41c9-8fd8-7830b21ba3ec-kube-api-access-5fn4k\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:53 crc kubenswrapper[4768]: I1124 17:10:53.824727 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/319b2ddf-3b71-41c9-8fd8-7830b21ba3ec-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:53 crc kubenswrapper[4768]: E1124 17:10:53.825304 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75ca58064731d97dfe67852f03b501219fe9adc5925e18e584331e214e1e04cc\": container with ID starting with 75ca58064731d97dfe67852f03b501219fe9adc5925e18e584331e214e1e04cc not found: ID does not exist" containerID="75ca58064731d97dfe67852f03b501219fe9adc5925e18e584331e214e1e04cc" Nov 24 17:10:53 crc kubenswrapper[4768]: I1124 17:10:53.825333 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75ca58064731d97dfe67852f03b501219fe9adc5925e18e584331e214e1e04cc"} err="failed to get container status \"75ca58064731d97dfe67852f03b501219fe9adc5925e18e584331e214e1e04cc\": rpc error: code = NotFound desc = could not find container \"75ca58064731d97dfe67852f03b501219fe9adc5925e18e584331e214e1e04cc\": container with ID starting with 75ca58064731d97dfe67852f03b501219fe9adc5925e18e584331e214e1e04cc not found: ID does not exist" Nov 24 17:10:53 crc kubenswrapper[4768]: I1124 17:10:53.829983 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/319b2ddf-3b71-41c9-8fd8-7830b21ba3ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "319b2ddf-3b71-41c9-8fd8-7830b21ba3ec" (UID: "319b2ddf-3b71-41c9-8fd8-7830b21ba3ec"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:10:53 crc kubenswrapper[4768]: I1124 17:10:53.926017 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319b2ddf-3b71-41c9-8fd8-7830b21ba3ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.048750 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.061277 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.070481 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 17:10:54 crc kubenswrapper[4768]: E1124 17:10:54.071011 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="319b2ddf-3b71-41c9-8fd8-7830b21ba3ec" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.071039 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="319b2ddf-3b71-41c9-8fd8-7830b21ba3ec" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.071327 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="319b2ddf-3b71-41c9-8fd8-7830b21ba3ec" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.072049 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.073979 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.075990 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.078434 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.080884 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.130056 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f182adb-6256-41d3-b7f0-bfa5e16965f7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f182adb-6256-41d3-b7f0-bfa5e16965f7\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.130103 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f182adb-6256-41d3-b7f0-bfa5e16965f7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f182adb-6256-41d3-b7f0-bfa5e16965f7\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.130187 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f182adb-6256-41d3-b7f0-bfa5e16965f7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f182adb-6256-41d3-b7f0-bfa5e16965f7\") " pod="openstack/nova-cell1-novncproxy-0" 
Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.130207 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f182adb-6256-41d3-b7f0-bfa5e16965f7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f182adb-6256-41d3-b7f0-bfa5e16965f7\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.130339 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnk7m\" (UniqueName: \"kubernetes.io/projected/0f182adb-6256-41d3-b7f0-bfa5e16965f7-kube-api-access-xnk7m\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f182adb-6256-41d3-b7f0-bfa5e16965f7\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.232706 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f182adb-6256-41d3-b7f0-bfa5e16965f7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f182adb-6256-41d3-b7f0-bfa5e16965f7\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.232774 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f182adb-6256-41d3-b7f0-bfa5e16965f7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f182adb-6256-41d3-b7f0-bfa5e16965f7\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.232900 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f182adb-6256-41d3-b7f0-bfa5e16965f7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f182adb-6256-41d3-b7f0-bfa5e16965f7\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.232929 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f182adb-6256-41d3-b7f0-bfa5e16965f7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f182adb-6256-41d3-b7f0-bfa5e16965f7\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.232960 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnk7m\" (UniqueName: \"kubernetes.io/projected/0f182adb-6256-41d3-b7f0-bfa5e16965f7-kube-api-access-xnk7m\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f182adb-6256-41d3-b7f0-bfa5e16965f7\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.238118 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f182adb-6256-41d3-b7f0-bfa5e16965f7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f182adb-6256-41d3-b7f0-bfa5e16965f7\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.238120 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f182adb-6256-41d3-b7f0-bfa5e16965f7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f182adb-6256-41d3-b7f0-bfa5e16965f7\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.238428 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f182adb-6256-41d3-b7f0-bfa5e16965f7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f182adb-6256-41d3-b7f0-bfa5e16965f7\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.239214 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f182adb-6256-41d3-b7f0-bfa5e16965f7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f182adb-6256-41d3-b7f0-bfa5e16965f7\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.247933 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnk7m\" (UniqueName: \"kubernetes.io/projected/0f182adb-6256-41d3-b7f0-bfa5e16965f7-kube-api-access-xnk7m\") pod \"nova-cell1-novncproxy-0\" (UID: \"0f182adb-6256-41d3-b7f0-bfa5e16965f7\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.387179 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 24 17:10:54 crc kubenswrapper[4768]: I1124 17:10:54.876675 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 24 17:10:55 crc kubenswrapper[4768]: I1124 17:10:55.599439 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="319b2ddf-3b71-41c9-8fd8-7830b21ba3ec" path="/var/lib/kubelet/pods/319b2ddf-3b71-41c9-8fd8-7830b21ba3ec/volumes"
Nov 24 17:10:55 crc kubenswrapper[4768]: I1124 17:10:55.745891 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0f182adb-6256-41d3-b7f0-bfa5e16965f7","Type":"ContainerStarted","Data":"6eaf2d6a0e778edd2cce6a8d2efccca368c3688951255bff1e7f16b6d3cb537e"}
Nov 24 17:10:55 crc kubenswrapper[4768]: I1124 17:10:55.745958 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0f182adb-6256-41d3-b7f0-bfa5e16965f7","Type":"ContainerStarted","Data":"ea2eedfe0c13b796afc90121e4f41a53fae00356d44497bd2ec73e704c142ce9"}
Nov 24 17:10:55 crc kubenswrapper[4768]: I1124 17:10:55.782616 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.78258856 podStartE2EDuration="1.78258856s" podCreationTimestamp="2025-11-24 17:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:10:55.76950832 +0000 UTC m=+1137.016477018" watchObservedRunningTime="2025-11-24 17:10:55.78258856 +0000 UTC m=+1137.029557228"
Nov 24 17:10:57 crc kubenswrapper[4768]: I1124 17:10:57.932962 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 24 17:10:57 crc kubenswrapper[4768]: I1124 17:10:57.934245 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 24 17:10:57 crc kubenswrapper[4768]: I1124 17:10:57.948083 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 24 17:10:57 crc kubenswrapper[4768]: I1124 17:10:57.986251 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 24 17:10:58 crc kubenswrapper[4768]: I1124 17:10:58.792146 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
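The pod_startup_latency_tracker record above is plain timestamp arithmetic over Go's default time format: with no image pull (firstStartedPulling and lastFinishedPulling at the zero time), podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp. A sketch checking the nova-cell1-novncproxy-0 numbers from the record:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's default time.Time formatting, exactly as printed in the records above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-11-24 17:10:54 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-11-24 17:10:55.78258856 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(observed.Sub(created)) // 1.78258856s, matching podStartSLOduration
}
```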
Nov 24 17:10:58 crc kubenswrapper[4768]: I1124 17:10:58.796836 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.021674 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xcjb4"]
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.037419 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xcjb4"]
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.037535 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4"
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.150994 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aad7301-e116-40bb-9af0-f19afd1d17b4-config\") pod \"dnsmasq-dns-89c5cd4d5-xcjb4\" (UID: \"7aad7301-e116-40bb-9af0-f19afd1d17b4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4"
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.151074 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7aad7301-e116-40bb-9af0-f19afd1d17b4-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-xcjb4\" (UID: \"7aad7301-e116-40bb-9af0-f19afd1d17b4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4"
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.151126 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7aad7301-e116-40bb-9af0-f19afd1d17b4-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-xcjb4\" (UID: \"7aad7301-e116-40bb-9af0-f19afd1d17b4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4"
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.151809 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7aad7301-e116-40bb-9af0-f19afd1d17b4-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-xcjb4\" (UID: \"7aad7301-e116-40bb-9af0-f19afd1d17b4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4"
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.151949 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7aad7301-e116-40bb-9af0-f19afd1d17b4-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-xcjb4\" (UID: \"7aad7301-e116-40bb-9af0-f19afd1d17b4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4"
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.152214 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpr2z\" (UniqueName: \"kubernetes.io/projected/7aad7301-e116-40bb-9af0-f19afd1d17b4-kube-api-access-jpr2z\") pod \"dnsmasq-dns-89c5cd4d5-xcjb4\" (UID: \"7aad7301-e116-40bb-9af0-f19afd1d17b4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4"
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.254133 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7aad7301-e116-40bb-9af0-f19afd1d17b4-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-xcjb4\" (UID: \"7aad7301-e116-40bb-9af0-f19afd1d17b4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4"
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.254211 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7aad7301-e116-40bb-9af0-f19afd1d17b4-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-xcjb4\" (UID: \"7aad7301-e116-40bb-9af0-f19afd1d17b4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4"
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.254273 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7aad7301-e116-40bb-9af0-f19afd1d17b4-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-xcjb4\" (UID: \"7aad7301-e116-40bb-9af0-f19afd1d17b4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4"
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.254306 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7aad7301-e116-40bb-9af0-f19afd1d17b4-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-xcjb4\" (UID: \"7aad7301-e116-40bb-9af0-f19afd1d17b4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4"
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.254394 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpr2z\" (UniqueName: \"kubernetes.io/projected/7aad7301-e116-40bb-9af0-f19afd1d17b4-kube-api-access-jpr2z\") pod \"dnsmasq-dns-89c5cd4d5-xcjb4\" (UID: \"7aad7301-e116-40bb-9af0-f19afd1d17b4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4"
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.254469 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aad7301-e116-40bb-9af0-f19afd1d17b4-config\") pod \"dnsmasq-dns-89c5cd4d5-xcjb4\" (UID: \"7aad7301-e116-40bb-9af0-f19afd1d17b4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4"
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.255680 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aad7301-e116-40bb-9af0-f19afd1d17b4-config\") pod \"dnsmasq-dns-89c5cd4d5-xcjb4\" (UID: \"7aad7301-e116-40bb-9af0-f19afd1d17b4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4"
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.255808 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7aad7301-e116-40bb-9af0-f19afd1d17b4-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-xcjb4\" (UID: \"7aad7301-e116-40bb-9af0-f19afd1d17b4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4"
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.256430 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7aad7301-e116-40bb-9af0-f19afd1d17b4-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-xcjb4\" (UID: \"7aad7301-e116-40bb-9af0-f19afd1d17b4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4"
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.256508 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7aad7301-e116-40bb-9af0-f19afd1d17b4-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-xcjb4\" (UID: \"7aad7301-e116-40bb-9af0-f19afd1d17b4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4"
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.256692 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7aad7301-e116-40bb-9af0-f19afd1d17b4-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-xcjb4\" (UID: \"7aad7301-e116-40bb-9af0-f19afd1d17b4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4"
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.286638 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpr2z\" (UniqueName: \"kubernetes.io/projected/7aad7301-e116-40bb-9af0-f19afd1d17b4-kube-api-access-jpr2z\") pod \"dnsmasq-dns-89c5cd4d5-xcjb4\" (UID: \"7aad7301-e116-40bb-9af0-f19afd1d17b4\") " pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4"
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.364766 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4"
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.387793 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Nov 24 17:10:59 crc kubenswrapper[4768]: I1124 17:10:59.953142 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-xcjb4"]
Nov 24 17:10:59 crc kubenswrapper[4768]: W1124 17:10:59.955898 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7aad7301_e116_40bb_9af0_f19afd1d17b4.slice/crio-ec77ea0fb07b08505d6802445e1db606438bf59a057062090a8bf42755e00bd0 WatchSource:0}: Error finding container ec77ea0fb07b08505d6802445e1db606438bf59a057062090a8bf42755e00bd0: Status 404 returned error can't find the container with id ec77ea0fb07b08505d6802445e1db606438bf59a057062090a8bf42755e00bd0
Nov 24 17:11:00 crc kubenswrapper[4768]: I1124 17:11:00.735127 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 17:11:00 crc kubenswrapper[4768]: I1124 17:11:00.736011 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8fc2880d-6b67-430a-8a36-6339821b2fb0" containerName="ceilometer-central-agent" containerID="cri-o://67808bd1850913efbd3bbeb884890ac15880e29fe5927306718df859a79d328d" gracePeriod=30
Nov 24 17:11:00 crc kubenswrapper[4768]: I1124 17:11:00.736524 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8fc2880d-6b67-430a-8a36-6339821b2fb0" containerName="proxy-httpd" containerID="cri-o://72d42fc107c69087593f79543b8627ece8db95e6255248722c00b7dc89190c2f" gracePeriod=30
Nov 24 17:11:00 crc kubenswrapper[4768]: I1124 17:11:00.738767 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8fc2880d-6b67-430a-8a36-6339821b2fb0" containerName="ceilometer-notification-agent" containerID="cri-o://fa5743b3d151ab345867bca12752ae4c8697a767c2ae73b04e69a4f2df6a6e7a" gracePeriod=30
Nov 24 17:11:00 crc kubenswrapper[4768]: I1124 17:11:00.738941 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8fc2880d-6b67-430a-8a36-6339821b2fb0" containerName="sg-core" containerID="cri-o://801adc3560fc59757f8e5267028bca9275a811cd888099b04e60b3ed4da09040" gracePeriod=30
Nov 24 17:11:00 crc kubenswrapper[4768]: I1124 17:11:00.809785 4768 generic.go:334] "Generic (PLEG): container finished" podID="7aad7301-e116-40bb-9af0-f19afd1d17b4" containerID="760dfa21506afa90bcb6378eb0853de407e77e51317cf60c29520f985ac75fee" exitCode=0
Nov 24 17:11:00 crc kubenswrapper[4768]: I1124 17:11:00.811276 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4" event={"ID":"7aad7301-e116-40bb-9af0-f19afd1d17b4","Type":"ContainerDied","Data":"760dfa21506afa90bcb6378eb0853de407e77e51317cf60c29520f985ac75fee"}
Nov 24 17:11:00 crc kubenswrapper[4768]: I1124 17:11:00.811303 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4" event={"ID":"7aad7301-e116-40bb-9af0-f19afd1d17b4","Type":"ContainerStarted","Data":"ec77ea0fb07b08505d6802445e1db606438bf59a057062090a8bf42755e00bd0"}
Nov 24 17:11:01 crc kubenswrapper[4768]: I1124 17:11:01.340166 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 24 17:11:01 crc kubenswrapper[4768]: I1124 17:11:01.823257 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4" event={"ID":"7aad7301-e116-40bb-9af0-f19afd1d17b4","Type":"ContainerStarted","Data":"024ddce366fa95d52bccd8c8cb7d8dbd0f7c1fb7feb2df65c67f38f4a4476c3e"}
Nov 24 17:11:01 crc kubenswrapper[4768]: I1124 17:11:01.824198 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4"
Nov 24 17:11:01 crc kubenswrapper[4768]: I1124 17:11:01.827543 4768 generic.go:334] "Generic (PLEG): container finished" podID="8fc2880d-6b67-430a-8a36-6339821b2fb0" containerID="72d42fc107c69087593f79543b8627ece8db95e6255248722c00b7dc89190c2f" exitCode=0
Nov 24 17:11:01 crc kubenswrapper[4768]: I1124 17:11:01.827609 4768 generic.go:334] "Generic (PLEG): container finished" podID="8fc2880d-6b67-430a-8a36-6339821b2fb0" containerID="801adc3560fc59757f8e5267028bca9275a811cd888099b04e60b3ed4da09040" exitCode=2
Nov 24 17:11:01 crc kubenswrapper[4768]: I1124 17:11:01.827631 4768 generic.go:334] "Generic (PLEG): container finished" podID="8fc2880d-6b67-430a-8a36-6339821b2fb0" containerID="67808bd1850913efbd3bbeb884890ac15880e29fe5927306718df859a79d328d" exitCode=0
Nov 24 17:11:01 crc kubenswrapper[4768]: I1124 17:11:01.827639 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fc2880d-6b67-430a-8a36-6339821b2fb0","Type":"ContainerDied","Data":"72d42fc107c69087593f79543b8627ece8db95e6255248722c00b7dc89190c2f"}
Nov 24 17:11:01 crc kubenswrapper[4768]: I1124 17:11:01.827706 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fc2880d-6b67-430a-8a36-6339821b2fb0","Type":"ContainerDied","Data":"801adc3560fc59757f8e5267028bca9275a811cd888099b04e60b3ed4da09040"}
Nov 24 17:11:01 crc kubenswrapper[4768]: I1124 17:11:01.827742 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fc2880d-6b67-430a-8a36-6339821b2fb0","Type":"ContainerDied","Data":"67808bd1850913efbd3bbeb884890ac15880e29fe5927306718df859a79d328d"}
Nov 24 17:11:01 crc kubenswrapper[4768]: I1124 17:11:01.827981 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d16daa8a-8657-4552-802a-ca7e557f1d4f" containerName="nova-api-log" containerID="cri-o://12b49de96e6553235b78ec119176ec7243a14be0d6cb9f5ebc94bea1ff428c2e" gracePeriod=30
Nov 24 17:11:01 crc kubenswrapper[4768]: I1124 17:11:01.827992 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d16daa8a-8657-4552-802a-ca7e557f1d4f" containerName="nova-api-api" containerID="cri-o://299690f5d5b61803d59d2d2d1532971b6d040765cf98b5df9d0e6061ae6873c0" gracePeriod=30
Nov 24 17:11:01 crc kubenswrapper[4768]: I1124 17:11:01.856119 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4" podStartSLOduration=3.856095416 podStartE2EDuration="3.856095416s" podCreationTimestamp="2025-11-24 17:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:11:01.844466717 +0000 UTC m=+1143.091435405" watchObservedRunningTime="2025-11-24 17:11:01.856095416 +0000 UTC m=+1143.103064074"
Nov 24 17:11:02 crc kubenswrapper[4768]: I1124 17:11:02.841210 4768 generic.go:334] "Generic (PLEG): container finished" podID="d16daa8a-8657-4552-802a-ca7e557f1d4f" containerID="12b49de96e6553235b78ec119176ec7243a14be0d6cb9f5ebc94bea1ff428c2e" exitCode=143
Nov 24 17:11:02 crc kubenswrapper[4768]: I1124 17:11:02.841261 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d16daa8a-8657-4552-802a-ca7e557f1d4f","Type":"ContainerDied","Data":"12b49de96e6553235b78ec119176ec7243a14be0d6cb9f5ebc94bea1ff428c2e"}
Nov 24 17:11:04 crc kubenswrapper[4768]: I1124 17:11:04.388535 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Nov 24 17:11:04 crc kubenswrapper[4768]: I1124 17:11:04.411873 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Nov 24 17:11:04 crc kubenswrapper[4768]: I1124 17:11:04.880884 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.137645 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-6hqpk"]
Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.139600 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6hqpk"
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6hqpk" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.142345 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.142717 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.156197 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6hqpk"] Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.277055 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-scripts\") pod \"nova-cell1-cell-mapping-6hqpk\" (UID: \"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9\") " pod="openstack/nova-cell1-cell-mapping-6hqpk" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.277473 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6d8x\" (UniqueName: \"kubernetes.io/projected/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-kube-api-access-k6d8x\") pod \"nova-cell1-cell-mapping-6hqpk\" (UID: \"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9\") " pod="openstack/nova-cell1-cell-mapping-6hqpk" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.277520 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-config-data\") pod \"nova-cell1-cell-mapping-6hqpk\" (UID: \"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9\") " pod="openstack/nova-cell1-cell-mapping-6hqpk" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.277567 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6hqpk\" (UID: \"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9\") " pod="openstack/nova-cell1-cell-mapping-6hqpk" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.379380 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6d8x\" (UniqueName: \"kubernetes.io/projected/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-kube-api-access-k6d8x\") pod \"nova-cell1-cell-mapping-6hqpk\" (UID: \"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9\") " pod="openstack/nova-cell1-cell-mapping-6hqpk" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.379424 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-config-data\") pod \"nova-cell1-cell-mapping-6hqpk\" (UID: \"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9\") " pod="openstack/nova-cell1-cell-mapping-6hqpk" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.379486 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6hqpk\" (UID: \"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9\") " pod="openstack/nova-cell1-cell-mapping-6hqpk" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.379602 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-scripts\") pod \"nova-cell1-cell-mapping-6hqpk\" (UID: \"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9\") " pod="openstack/nova-cell1-cell-mapping-6hqpk" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.384703 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-scripts\") pod \"nova-cell1-cell-mapping-6hqpk\" (UID: \"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9\") " pod="openstack/nova-cell1-cell-mapping-6hqpk" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.385006 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6hqpk\" (UID: \"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9\") " pod="openstack/nova-cell1-cell-mapping-6hqpk" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.385986 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-config-data\") pod \"nova-cell1-cell-mapping-6hqpk\" (UID: \"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9\") " pod="openstack/nova-cell1-cell-mapping-6hqpk" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.405225 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6d8x\" (UniqueName: \"kubernetes.io/projected/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-kube-api-access-k6d8x\") pod \"nova-cell1-cell-mapping-6hqpk\" (UID: \"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9\") " pod="openstack/nova-cell1-cell-mapping-6hqpk" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.461828 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6hqpk" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.469547 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.583160 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d16daa8a-8657-4552-802a-ca7e557f1d4f-combined-ca-bundle\") pod \"d16daa8a-8657-4552-802a-ca7e557f1d4f\" (UID: \"d16daa8a-8657-4552-802a-ca7e557f1d4f\") " Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.583590 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrvg5\" (UniqueName: \"kubernetes.io/projected/d16daa8a-8657-4552-802a-ca7e557f1d4f-kube-api-access-jrvg5\") pod \"d16daa8a-8657-4552-802a-ca7e557f1d4f\" (UID: \"d16daa8a-8657-4552-802a-ca7e557f1d4f\") " Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.583750 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d16daa8a-8657-4552-802a-ca7e557f1d4f-logs\") pod \"d16daa8a-8657-4552-802a-ca7e557f1d4f\" (UID: \"d16daa8a-8657-4552-802a-ca7e557f1d4f\") " Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.583809 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d16daa8a-8657-4552-802a-ca7e557f1d4f-config-data\") pod \"d16daa8a-8657-4552-802a-ca7e557f1d4f\" (UID: \"d16daa8a-8657-4552-802a-ca7e557f1d4f\") " Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.584374 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d16daa8a-8657-4552-802a-ca7e557f1d4f-logs" (OuterVolumeSpecName: "logs") pod "d16daa8a-8657-4552-802a-ca7e557f1d4f" (UID: "d16daa8a-8657-4552-802a-ca7e557f1d4f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.589874 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d16daa8a-8657-4552-802a-ca7e557f1d4f-kube-api-access-jrvg5" (OuterVolumeSpecName: "kube-api-access-jrvg5") pod "d16daa8a-8657-4552-802a-ca7e557f1d4f" (UID: "d16daa8a-8657-4552-802a-ca7e557f1d4f"). InnerVolumeSpecName "kube-api-access-jrvg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.618696 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d16daa8a-8657-4552-802a-ca7e557f1d4f-config-data" (OuterVolumeSpecName: "config-data") pod "d16daa8a-8657-4552-802a-ca7e557f1d4f" (UID: "d16daa8a-8657-4552-802a-ca7e557f1d4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.620917 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d16daa8a-8657-4552-802a-ca7e557f1d4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d16daa8a-8657-4552-802a-ca7e557f1d4f" (UID: "d16daa8a-8657-4552-802a-ca7e557f1d4f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.685613 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d16daa8a-8657-4552-802a-ca7e557f1d4f-logs\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.685657 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d16daa8a-8657-4552-802a-ca7e557f1d4f-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.685666 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d16daa8a-8657-4552-802a-ca7e557f1d4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.685677 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrvg5\" (UniqueName: \"kubernetes.io/projected/d16daa8a-8657-4552-802a-ca7e557f1d4f-kube-api-access-jrvg5\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.871792 4768 generic.go:334] "Generic (PLEG): container finished" podID="d16daa8a-8657-4552-802a-ca7e557f1d4f" containerID="299690f5d5b61803d59d2d2d1532971b6d040765cf98b5df9d0e6061ae6873c0" exitCode=0 Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.872621 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.873558 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d16daa8a-8657-4552-802a-ca7e557f1d4f","Type":"ContainerDied","Data":"299690f5d5b61803d59d2d2d1532971b6d040765cf98b5df9d0e6061ae6873c0"} Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.873629 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d16daa8a-8657-4552-802a-ca7e557f1d4f","Type":"ContainerDied","Data":"5a1093d24bdc757e1865d097ca5bf6b58da7c32fac275dba042cdae5a450bcee"} Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.873654 4768 scope.go:117] "RemoveContainer" containerID="299690f5d5b61803d59d2d2d1532971b6d040765cf98b5df9d0e6061ae6873c0" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.914537 4768 scope.go:117] "RemoveContainer" containerID="12b49de96e6553235b78ec119176ec7243a14be0d6cb9f5ebc94bea1ff428c2e" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.929860 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.955207 4768 scope.go:117] "RemoveContainer" containerID="299690f5d5b61803d59d2d2d1532971b6d040765cf98b5df9d0e6061ae6873c0" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.955508 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 17:11:05 crc kubenswrapper[4768]: E1124 17:11:05.955769 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"299690f5d5b61803d59d2d2d1532971b6d040765cf98b5df9d0e6061ae6873c0\": container with ID starting with 299690f5d5b61803d59d2d2d1532971b6d040765cf98b5df9d0e6061ae6873c0 not found: ID does not exist" containerID="299690f5d5b61803d59d2d2d1532971b6d040765cf98b5df9d0e6061ae6873c0" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.955831 4768 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"299690f5d5b61803d59d2d2d1532971b6d040765cf98b5df9d0e6061ae6873c0"} err="failed to get container status \"299690f5d5b61803d59d2d2d1532971b6d040765cf98b5df9d0e6061ae6873c0\": rpc error: code = NotFound desc = could not find container \"299690f5d5b61803d59d2d2d1532971b6d040765cf98b5df9d0e6061ae6873c0\": container with ID starting with 299690f5d5b61803d59d2d2d1532971b6d040765cf98b5df9d0e6061ae6873c0 not found: ID does not exist" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.955868 4768 scope.go:117] "RemoveContainer" containerID="12b49de96e6553235b78ec119176ec7243a14be0d6cb9f5ebc94bea1ff428c2e" Nov 24 17:11:05 crc kubenswrapper[4768]: E1124 17:11:05.956246 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12b49de96e6553235b78ec119176ec7243a14be0d6cb9f5ebc94bea1ff428c2e\": container with ID starting with 12b49de96e6553235b78ec119176ec7243a14be0d6cb9f5ebc94bea1ff428c2e not found: ID does not exist" containerID="12b49de96e6553235b78ec119176ec7243a14be0d6cb9f5ebc94bea1ff428c2e" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.956282 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b49de96e6553235b78ec119176ec7243a14be0d6cb9f5ebc94bea1ff428c2e"} err="failed to get container status \"12b49de96e6553235b78ec119176ec7243a14be0d6cb9f5ebc94bea1ff428c2e\": rpc error: code = NotFound desc = could not find container \"12b49de96e6553235b78ec119176ec7243a14be0d6cb9f5ebc94bea1ff428c2e\": container with ID starting with 12b49de96e6553235b78ec119176ec7243a14be0d6cb9f5ebc94bea1ff428c2e not found: ID does not exist" Nov 24 17:11:05 crc kubenswrapper[4768]: W1124 17:11:05.962002 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c79e6bf_ae03_4b73_9e78_d55aa9e05cd9.slice/crio-882c8945f393d2b607b34c2ccb5329e0fd9b884ff7acdbea5157a68c2aeedd27 WatchSource:0}: Error finding container 882c8945f393d2b607b34c2ccb5329e0fd9b884ff7acdbea5157a68c2aeedd27: Status 404 returned error can't find the container with id 882c8945f393d2b607b34c2ccb5329e0fd9b884ff7acdbea5157a68c2aeedd27 Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.969172 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6hqpk"] Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.977742 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 17:11:05 crc kubenswrapper[4768]: E1124 17:11:05.978111 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d16daa8a-8657-4552-802a-ca7e557f1d4f" containerName="nova-api-log" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.978130 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d16daa8a-8657-4552-802a-ca7e557f1d4f" containerName="nova-api-log" Nov 24 17:11:05 crc kubenswrapper[4768]: E1124 17:11:05.978158 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d16daa8a-8657-4552-802a-ca7e557f1d4f" containerName="nova-api-api" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.978165 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d16daa8a-8657-4552-802a-ca7e557f1d4f" containerName="nova-api-api" Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.978389 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d16daa8a-8657-4552-802a-ca7e557f1d4f" containerName="nova-api-api" Nov 24 17:11:05 crc 
Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.978403 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d16daa8a-8657-4552-802a-ca7e557f1d4f" containerName="nova-api-log"
Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.979306 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.981303 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.981838 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.983916 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Nov 24 17:11:05 crc kubenswrapper[4768]: I1124 17:11:05.986624 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.095914 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " pod="openstack/nova-api-0"
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.096879 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5whzs\" (UniqueName: \"kubernetes.io/projected/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-kube-api-access-5whzs\") pod \"nova-api-0\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " pod="openstack/nova-api-0"
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.097000 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-config-data\") pod \"nova-api-0\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " pod="openstack/nova-api-0"
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.097094 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-logs\") pod \"nova-api-0\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " pod="openstack/nova-api-0"
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.097212 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-public-tls-certs\") pod \"nova-api-0\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " pod="openstack/nova-api-0"
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.097370 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " pod="openstack/nova-api-0"
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.199246 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " pod="openstack/nova-api-0"
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.199391 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5whzs\" (UniqueName: \"kubernetes.io/projected/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-kube-api-access-5whzs\") pod \"nova-api-0\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " pod="openstack/nova-api-0"
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.199422 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-config-data\") pod \"nova-api-0\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " pod="openstack/nova-api-0"
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.199969 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-logs\") pod \"nova-api-0\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " pod="openstack/nova-api-0"
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.200080 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-logs\") pod \"nova-api-0\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " pod="openstack/nova-api-0"
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.200143 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-public-tls-certs\") pod \"nova-api-0\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " pod="openstack/nova-api-0"
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.200168 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " pod="openstack/nova-api-0"
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.203023 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " pod="openstack/nova-api-0"
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.206738 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-public-tls-certs\") pod \"nova-api-0\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " pod="openstack/nova-api-0"
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.206823 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " pod="openstack/nova-api-0"
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.207122 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-config-data\") pod \"nova-api-0\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " pod="openstack/nova-api-0"
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.214679 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5whzs\" (UniqueName: \"kubernetes.io/projected/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-kube-api-access-5whzs\") pod \"nova-api-0\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " pod="openstack/nova-api-0"
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.294175 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 24 17:11:06 crc kubenswrapper[4768]: W1124 17:11:06.855059 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48a810d5_7ca5_496d_b8ec_63be9b26eb8a.slice/crio-a8027a7a0b83800c668d160dc3bd575445f3b7108b698482d2bbcf80150b7f56 WatchSource:0}: Error finding container a8027a7a0b83800c668d160dc3bd575445f3b7108b698482d2bbcf80150b7f56: Status 404 returned error can't find the container with id a8027a7a0b83800c668d160dc3bd575445f3b7108b698482d2bbcf80150b7f56
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.858662 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.882744 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6hqpk" event={"ID":"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9","Type":"ContainerStarted","Data":"f6779e2c110cc57a2aea1551e82bd04afb6cc30a7fa2816b593b46c271eeaa25"}
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.882796 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6hqpk" event={"ID":"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9","Type":"ContainerStarted","Data":"882c8945f393d2b607b34c2ccb5329e0fd9b884ff7acdbea5157a68c2aeedd27"}
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.885399 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48a810d5-7ca5-496d-b8ec-63be9b26eb8a","Type":"ContainerStarted","Data":"a8027a7a0b83800c668d160dc3bd575445f3b7108b698482d2bbcf80150b7f56"}
Nov 24 17:11:06 crc kubenswrapper[4768]: I1124 17:11:06.906169 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-6hqpk" podStartSLOduration=1.906151071 podStartE2EDuration="1.906151071s" podCreationTimestamp="2025-11-24 17:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:11:06.900527172 +0000 UTC m=+1148.147495830" watchObservedRunningTime="2025-11-24 17:11:06.906151071 +0000 UTC m=+1148.153119729"
Nov 24 17:11:07 crc kubenswrapper[4768]: I1124 17:11:07.605245 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d16daa8a-8657-4552-802a-ca7e557f1d4f" path="/var/lib/kubelet/pods/d16daa8a-8657-4552-802a-ca7e557f1d4f/volumes"
Nov 24 17:11:07 crc kubenswrapper[4768]: I1124 17:11:07.903980 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48a810d5-7ca5-496d-b8ec-63be9b26eb8a","Type":"ContainerStarted","Data":"6aaead22b051ffdb8b02f8911203de593fd7ffdd76623925bfea52508314c202"}
Nov 24 17:11:07 crc kubenswrapper[4768]: I1124 17:11:07.904287 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48a810d5-7ca5-496d-b8ec-63be9b26eb8a","Type":"ContainerStarted","Data":"0440a0e7e82b2db26ae37e7abb1ace0410c558afbf18fbe61fdaf919dccce656"}
Nov 24 17:11:07 crc kubenswrapper[4768]: I1124
17:11:07.908623 4768 generic.go:334] "Generic (PLEG): container finished" podID="8fc2880d-6b67-430a-8a36-6339821b2fb0" containerID="fa5743b3d151ab345867bca12752ae4c8697a767c2ae73b04e69a4f2df6a6e7a" exitCode=0 Nov 24 17:11:07 crc kubenswrapper[4768]: I1124 17:11:07.909183 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fc2880d-6b67-430a-8a36-6339821b2fb0","Type":"ContainerDied","Data":"fa5743b3d151ab345867bca12752ae4c8697a767c2ae73b04e69a4f2df6a6e7a"} Nov 24 17:11:07 crc kubenswrapper[4768]: I1124 17:11:07.909210 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fc2880d-6b67-430a-8a36-6339821b2fb0","Type":"ContainerDied","Data":"7936207b37e4f2dfe10228ec1dff30be20f9012513f9b83ff242deda1c190102"} Nov 24 17:11:07 crc kubenswrapper[4768]: I1124 17:11:07.909220 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7936207b37e4f2dfe10228ec1dff30be20f9012513f9b83ff242deda1c190102" Nov 24 17:11:07 crc kubenswrapper[4768]: I1124 17:11:07.929676 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.929655752 podStartE2EDuration="2.929655752s" podCreationTimestamp="2025-11-24 17:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:11:07.920212265 +0000 UTC m=+1149.167180923" watchObservedRunningTime="2025-11-24 17:11:07.929655752 +0000 UTC m=+1149.176624410" Nov 24 17:11:07 crc kubenswrapper[4768]: I1124 17:11:07.960059 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.045457 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-config-data\") pod \"8fc2880d-6b67-430a-8a36-6339821b2fb0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.045552 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-sg-core-conf-yaml\") pod \"8fc2880d-6b67-430a-8a36-6339821b2fb0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.045588 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-scripts\") pod \"8fc2880d-6b67-430a-8a36-6339821b2fb0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.045685 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fc2880d-6b67-430a-8a36-6339821b2fb0-log-httpd\") pod \"8fc2880d-6b67-430a-8a36-6339821b2fb0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.045732 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssjft\" (UniqueName: \"kubernetes.io/projected/8fc2880d-6b67-430a-8a36-6339821b2fb0-kube-api-access-ssjft\") pod \"8fc2880d-6b67-430a-8a36-6339821b2fb0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 
17:11:08.045795 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-ceilometer-tls-certs\") pod \"8fc2880d-6b67-430a-8a36-6339821b2fb0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.045831 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fc2880d-6b67-430a-8a36-6339821b2fb0-run-httpd\") pod \"8fc2880d-6b67-430a-8a36-6339821b2fb0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.045872 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-combined-ca-bundle\") pod \"8fc2880d-6b67-430a-8a36-6339821b2fb0\" (UID: \"8fc2880d-6b67-430a-8a36-6339821b2fb0\") " Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.063992 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fc2880d-6b67-430a-8a36-6339821b2fb0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8fc2880d-6b67-430a-8a36-6339821b2fb0" (UID: "8fc2880d-6b67-430a-8a36-6339821b2fb0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.067294 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fc2880d-6b67-430a-8a36-6339821b2fb0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8fc2880d-6b67-430a-8a36-6339821b2fb0" (UID: "8fc2880d-6b67-430a-8a36-6339821b2fb0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.072741 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-scripts" (OuterVolumeSpecName: "scripts") pod "8fc2880d-6b67-430a-8a36-6339821b2fb0" (UID: "8fc2880d-6b67-430a-8a36-6339821b2fb0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.083752 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fc2880d-6b67-430a-8a36-6339821b2fb0-kube-api-access-ssjft" (OuterVolumeSpecName: "kube-api-access-ssjft") pod "8fc2880d-6b67-430a-8a36-6339821b2fb0" (UID: "8fc2880d-6b67-430a-8a36-6339821b2fb0"). InnerVolumeSpecName "kube-api-access-ssjft". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.104317 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8fc2880d-6b67-430a-8a36-6339821b2fb0" (UID: "8fc2880d-6b67-430a-8a36-6339821b2fb0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.148758 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8fc2880d-6b67-430a-8a36-6339821b2fb0" (UID: "8fc2880d-6b67-430a-8a36-6339821b2fb0"). 
InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.151214 4768 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fc2880d-6b67-430a-8a36-6339821b2fb0-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.151242 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssjft\" (UniqueName: \"kubernetes.io/projected/8fc2880d-6b67-430a-8a36-6339821b2fb0-kube-api-access-ssjft\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.151253 4768 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.151262 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fc2880d-6b67-430a-8a36-6339821b2fb0-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.151269 4768 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.151277 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.162508 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fc2880d-6b67-430a-8a36-6339821b2fb0" (UID: "8fc2880d-6b67-430a-8a36-6339821b2fb0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.179185 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-config-data" (OuterVolumeSpecName: "config-data") pod "8fc2880d-6b67-430a-8a36-6339821b2fb0" (UID: "8fc2880d-6b67-430a-8a36-6339821b2fb0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.253066 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.253099 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fc2880d-6b67-430a-8a36-6339821b2fb0-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.918794 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.965541 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.975426 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.987467 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:11:08 crc kubenswrapper[4768]: E1124 17:11:08.987942 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fc2880d-6b67-430a-8a36-6339821b2fb0" containerName="ceilometer-notification-agent" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.987972 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fc2880d-6b67-430a-8a36-6339821b2fb0" containerName="ceilometer-notification-agent" Nov 24 17:11:08 crc kubenswrapper[4768]: E1124 17:11:08.988001 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fc2880d-6b67-430a-8a36-6339821b2fb0" containerName="ceilometer-central-agent" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.988013 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fc2880d-6b67-430a-8a36-6339821b2fb0" containerName="ceilometer-central-agent" Nov 24 17:11:08 crc kubenswrapper[4768]: E1124 17:11:08.988029 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fc2880d-6b67-430a-8a36-6339821b2fb0" containerName="sg-core" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.988040 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fc2880d-6b67-430a-8a36-6339821b2fb0" containerName="sg-core" Nov 24 17:11:08 crc kubenswrapper[4768]: E1124 17:11:08.988078 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fc2880d-6b67-430a-8a36-6339821b2fb0" containerName="proxy-httpd" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.988090 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fc2880d-6b67-430a-8a36-6339821b2fb0" containerName="proxy-httpd" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.988506 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fc2880d-6b67-430a-8a36-6339821b2fb0" containerName="proxy-httpd" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.988546 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fc2880d-6b67-430a-8a36-6339821b2fb0" containerName="sg-core" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.988567 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fc2880d-6b67-430a-8a36-6339821b2fb0" containerName="ceilometer-notification-agent" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.988597 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fc2880d-6b67-430a-8a36-6339821b2fb0" containerName="ceilometer-central-agent" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.993180 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.994981 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.995374 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 17:11:08 crc kubenswrapper[4768]: I1124 17:11:08.995551 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.014502 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.066928 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ljp2\" (UniqueName: \"kubernetes.io/projected/509cc4fd-7197-418e-9536-6024e2a95f58-kube-api-access-5ljp2\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.067048 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509cc4fd-7197-418e-9536-6024e2a95f58-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.067087 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/509cc4fd-7197-418e-9536-6024e2a95f58-config-data\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.067666 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/509cc4fd-7197-418e-9536-6024e2a95f58-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.067747 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/509cc4fd-7197-418e-9536-6024e2a95f58-scripts\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.067854 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/509cc4fd-7197-418e-9536-6024e2a95f58-log-httpd\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.067885 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/509cc4fd-7197-418e-9536-6024e2a95f58-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.067917 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/509cc4fd-7197-418e-9536-6024e2a95f58-run-httpd\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.169624 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/509cc4fd-7197-418e-9536-6024e2a95f58-scripts\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.169703 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/509cc4fd-7197-418e-9536-6024e2a95f58-log-httpd\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.169724 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/509cc4fd-7197-418e-9536-6024e2a95f58-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.169744 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/509cc4fd-7197-418e-9536-6024e2a95f58-run-httpd\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.169787 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ljp2\" (UniqueName: \"kubernetes.io/projected/509cc4fd-7197-418e-9536-6024e2a95f58-kube-api-access-5ljp2\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.169832 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509cc4fd-7197-418e-9536-6024e2a95f58-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.169853 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/509cc4fd-7197-418e-9536-6024e2a95f58-config-data\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.169897 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/509cc4fd-7197-418e-9536-6024e2a95f58-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.171022 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/509cc4fd-7197-418e-9536-6024e2a95f58-log-httpd\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.171128 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/509cc4fd-7197-418e-9536-6024e2a95f58-run-httpd\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.174394 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/509cc4fd-7197-418e-9536-6024e2a95f58-scripts\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.185065 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/509cc4fd-7197-418e-9536-6024e2a95f58-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.185272 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/509cc4fd-7197-418e-9536-6024e2a95f58-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.185544 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/509cc4fd-7197-418e-9536-6024e2a95f58-config-data\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.191660 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ljp2\" (UniqueName: \"kubernetes.io/projected/509cc4fd-7197-418e-9536-6024e2a95f58-kube-api-access-5ljp2\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.199614 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509cc4fd-7197-418e-9536-6024e2a95f58-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"509cc4fd-7197-418e-9536-6024e2a95f58\") " pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.308582 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.367133 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-xcjb4" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.431063 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-w52kt"] Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.431518 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-w52kt" podUID="594abc42-5146-4e9e-b9ed-a2c4e74de54b" containerName="dnsmasq-dns" containerID="cri-o://bd7257e6870ac84a54ce1c5eca8dbe774690d39875e28eeceab56162fa83764c" gracePeriod=10 Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.599479 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fc2880d-6b67-430a-8a36-6339821b2fb0" path="/var/lib/kubelet/pods/8fc2880d-6b67-430a-8a36-6339821b2fb0/volumes" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.837071 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.867902 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.927286 4768 generic.go:334] "Generic (PLEG): container finished" podID="594abc42-5146-4e9e-b9ed-a2c4e74de54b" containerID="bd7257e6870ac84a54ce1c5eca8dbe774690d39875e28eeceab56162fa83764c" exitCode=0 Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.927554 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-w52kt" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.928185 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-w52kt" event={"ID":"594abc42-5146-4e9e-b9ed-a2c4e74de54b","Type":"ContainerDied","Data":"bd7257e6870ac84a54ce1c5eca8dbe774690d39875e28eeceab56162fa83764c"} Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.928211 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-w52kt" event={"ID":"594abc42-5146-4e9e-b9ed-a2c4e74de54b","Type":"ContainerDied","Data":"e425b8cc4e339ee2d6da9f7b84346805bd132bf881192c2ddcc492292b341577"} Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.928226 4768 scope.go:117] "RemoveContainer" containerID="bd7257e6870ac84a54ce1c5eca8dbe774690d39875e28eeceab56162fa83764c" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.929585 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"509cc4fd-7197-418e-9536-6024e2a95f58","Type":"ContainerStarted","Data":"11bd34c7340bcf69670b9f9f22c7c039c023ec32107e8568fc48851766f9e961"} Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.950012 4768 scope.go:117] "RemoveContainer" containerID="b1271e23e6b3f996923ee7848b69739b227ce87564abb1dbd4cc5867a3234ef1" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.968628 4768 scope.go:117] "RemoveContainer" containerID="bd7257e6870ac84a54ce1c5eca8dbe774690d39875e28eeceab56162fa83764c" Nov 24 17:11:09 crc kubenswrapper[4768]: E1124 17:11:09.969041 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd7257e6870ac84a54ce1c5eca8dbe774690d39875e28eeceab56162fa83764c\": container with ID starting with 
bd7257e6870ac84a54ce1c5eca8dbe774690d39875e28eeceab56162fa83764c not found: ID does not exist" containerID="bd7257e6870ac84a54ce1c5eca8dbe774690d39875e28eeceab56162fa83764c" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.969099 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd7257e6870ac84a54ce1c5eca8dbe774690d39875e28eeceab56162fa83764c"} err="failed to get container status \"bd7257e6870ac84a54ce1c5eca8dbe774690d39875e28eeceab56162fa83764c\": rpc error: code = NotFound desc = could not find container \"bd7257e6870ac84a54ce1c5eca8dbe774690d39875e28eeceab56162fa83764c\": container with ID starting with bd7257e6870ac84a54ce1c5eca8dbe774690d39875e28eeceab56162fa83764c not found: ID does not exist" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.969133 4768 scope.go:117] "RemoveContainer" containerID="b1271e23e6b3f996923ee7848b69739b227ce87564abb1dbd4cc5867a3234ef1" Nov 24 17:11:09 crc kubenswrapper[4768]: E1124 17:11:09.970526 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1271e23e6b3f996923ee7848b69739b227ce87564abb1dbd4cc5867a3234ef1\": container with ID starting with b1271e23e6b3f996923ee7848b69739b227ce87564abb1dbd4cc5867a3234ef1 not found: ID does not exist" containerID="b1271e23e6b3f996923ee7848b69739b227ce87564abb1dbd4cc5867a3234ef1" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.970556 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1271e23e6b3f996923ee7848b69739b227ce87564abb1dbd4cc5867a3234ef1"} err="failed to get container status \"b1271e23e6b3f996923ee7848b69739b227ce87564abb1dbd4cc5867a3234ef1\": rpc error: code = NotFound desc = could not find container \"b1271e23e6b3f996923ee7848b69739b227ce87564abb1dbd4cc5867a3234ef1\": container with ID starting with b1271e23e6b3f996923ee7848b69739b227ce87564abb1dbd4cc5867a3234ef1 not found: ID does not exist" Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.992658 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-dns-swift-storage-0\") pod \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.992758 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-ovsdbserver-nb\") pod \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.992798 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-ovsdbserver-sb\") pod \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.992889 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-config\") pod \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.993044 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-8z5j4\" (UniqueName: \"kubernetes.io/projected/594abc42-5146-4e9e-b9ed-a2c4e74de54b-kube-api-access-8z5j4\") pod \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.993069 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-dns-svc\") pod \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\" (UID: \"594abc42-5146-4e9e-b9ed-a2c4e74de54b\") " Nov 24 17:11:09 crc kubenswrapper[4768]: I1124 17:11:09.998397 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/594abc42-5146-4e9e-b9ed-a2c4e74de54b-kube-api-access-8z5j4" (OuterVolumeSpecName: "kube-api-access-8z5j4") pod "594abc42-5146-4e9e-b9ed-a2c4e74de54b" (UID: "594abc42-5146-4e9e-b9ed-a2c4e74de54b"). InnerVolumeSpecName "kube-api-access-8z5j4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:11:10 crc kubenswrapper[4768]: I1124 17:11:10.041924 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "594abc42-5146-4e9e-b9ed-a2c4e74de54b" (UID: "594abc42-5146-4e9e-b9ed-a2c4e74de54b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:11:10 crc kubenswrapper[4768]: I1124 17:11:10.048469 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "594abc42-5146-4e9e-b9ed-a2c4e74de54b" (UID: "594abc42-5146-4e9e-b9ed-a2c4e74de54b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:11:10 crc kubenswrapper[4768]: I1124 17:11:10.049193 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "594abc42-5146-4e9e-b9ed-a2c4e74de54b" (UID: "594abc42-5146-4e9e-b9ed-a2c4e74de54b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:11:10 crc kubenswrapper[4768]: I1124 17:11:10.061949 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "594abc42-5146-4e9e-b9ed-a2c4e74de54b" (UID: "594abc42-5146-4e9e-b9ed-a2c4e74de54b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:11:10 crc kubenswrapper[4768]: I1124 17:11:10.068780 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-config" (OuterVolumeSpecName: "config") pod "594abc42-5146-4e9e-b9ed-a2c4e74de54b" (UID: "594abc42-5146-4e9e-b9ed-a2c4e74de54b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:11:10 crc kubenswrapper[4768]: I1124 17:11:10.094837 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z5j4\" (UniqueName: \"kubernetes.io/projected/594abc42-5146-4e9e-b9ed-a2c4e74de54b-kube-api-access-8z5j4\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:10 crc kubenswrapper[4768]: I1124 17:11:10.094860 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:10 crc kubenswrapper[4768]: I1124 17:11:10.094868 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:10 crc kubenswrapper[4768]: I1124 17:11:10.094876 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:10 crc kubenswrapper[4768]: I1124 17:11:10.094884 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:10 crc kubenswrapper[4768]: I1124 17:11:10.094893 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/594abc42-5146-4e9e-b9ed-a2c4e74de54b-config\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:10 crc kubenswrapper[4768]: I1124 17:11:10.271021 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-w52kt"] Nov 24 17:11:10 crc kubenswrapper[4768]: E1124 17:11:10.284843 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod594abc42_5146_4e9e_b9ed_a2c4e74de54b.slice\": RecentStats: unable to find data in memory cache]" Nov 24 17:11:10 crc kubenswrapper[4768]: I1124 17:11:10.289500 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-w52kt"] Nov 24 17:11:10 crc kubenswrapper[4768]: I1124 17:11:10.944256 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"509cc4fd-7197-418e-9536-6024e2a95f58","Type":"ContainerStarted","Data":"ee90391fe57f741ff0e22d4b8e41078f21d3afbd9c1b84976031c37ba2d8435b"} Nov 24 17:11:10 crc kubenswrapper[4768]: I1124 17:11:10.947685 4768 generic.go:334] "Generic (PLEG): container finished" podID="1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9" containerID="f6779e2c110cc57a2aea1551e82bd04afb6cc30a7fa2816b593b46c271eeaa25" exitCode=0 Nov 24 17:11:10 crc kubenswrapper[4768]: I1124 17:11:10.947735 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6hqpk" event={"ID":"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9","Type":"ContainerDied","Data":"f6779e2c110cc57a2aea1551e82bd04afb6cc30a7fa2816b593b46c271eeaa25"} Nov 24 17:11:11 crc kubenswrapper[4768]: I1124 17:11:11.594788 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="594abc42-5146-4e9e-b9ed-a2c4e74de54b" path="/var/lib/kubelet/pods/594abc42-5146-4e9e-b9ed-a2c4e74de54b/volumes" Nov 24 17:11:11 crc kubenswrapper[4768]: I1124 17:11:11.998794 4768 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"509cc4fd-7197-418e-9536-6024e2a95f58","Type":"ContainerStarted","Data":"3feda75935b069539da65e5c8eebccd1e445bfd867820d3da0874f2528124ed1"} Nov 24 17:11:12 crc kubenswrapper[4768]: I1124 17:11:12.388727 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6hqpk" Nov 24 17:11:12 crc kubenswrapper[4768]: I1124 17:11:12.545727 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-combined-ca-bundle\") pod \"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9\" (UID: \"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9\") " Nov 24 17:11:12 crc kubenswrapper[4768]: I1124 17:11:12.545812 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6d8x\" (UniqueName: \"kubernetes.io/projected/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-kube-api-access-k6d8x\") pod \"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9\" (UID: \"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9\") " Nov 24 17:11:12 crc kubenswrapper[4768]: I1124 17:11:12.545980 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-scripts\") pod \"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9\" (UID: \"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9\") " Nov 24 17:11:12 crc kubenswrapper[4768]: I1124 17:11:12.546045 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-config-data\") pod \"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9\" (UID: \"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9\") " Nov 24 17:11:12 crc kubenswrapper[4768]: I1124 17:11:12.551481 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-scripts" (OuterVolumeSpecName: "scripts") pod "1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9" (UID: "1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:11:12 crc kubenswrapper[4768]: I1124 17:11:12.568758 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-kube-api-access-k6d8x" (OuterVolumeSpecName: "kube-api-access-k6d8x") pod "1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9" (UID: "1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9"). InnerVolumeSpecName "kube-api-access-k6d8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:11:12 crc kubenswrapper[4768]: I1124 17:11:12.576139 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-config-data" (OuterVolumeSpecName: "config-data") pod "1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9" (UID: "1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:11:12 crc kubenswrapper[4768]: I1124 17:11:12.586206 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9" (UID: "1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:11:12 crc kubenswrapper[4768]: I1124 17:11:12.647844 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:12 crc kubenswrapper[4768]: I1124 17:11:12.647871 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:12 crc kubenswrapper[4768]: I1124 17:11:12.647881 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:12 crc kubenswrapper[4768]: I1124 17:11:12.647928 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6d8x\" (UniqueName: \"kubernetes.io/projected/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9-kube-api-access-k6d8x\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.013397 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"509cc4fd-7197-418e-9536-6024e2a95f58","Type":"ContainerStarted","Data":"dea7cd52e0823e97ed9c709439dea9ae2a2a256d82d583b8da7c163322a063c3"} Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.019703 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6hqpk" event={"ID":"1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9","Type":"ContainerDied","Data":"882c8945f393d2b607b34c2ccb5329e0fd9b884ff7acdbea5157a68c2aeedd27"} Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.019744 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="882c8945f393d2b607b34c2ccb5329e0fd9b884ff7acdbea5157a68c2aeedd27" Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.019799 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6hqpk" Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.136519 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.136782 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="48a810d5-7ca5-496d-b8ec-63be9b26eb8a" containerName="nova-api-log" containerID="cri-o://0440a0e7e82b2db26ae37e7abb1ace0410c558afbf18fbe61fdaf919dccce656" gracePeriod=30 Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.136828 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="48a810d5-7ca5-496d-b8ec-63be9b26eb8a" containerName="nova-api-api" containerID="cri-o://6aaead22b051ffdb8b02f8911203de593fd7ffdd76623925bfea52508314c202" gracePeriod=30 Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.153189 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.153751 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3a540e3c-2ee6-4c19-955a-40e614b40dce" containerName="nova-scheduler-scheduler" containerID="cri-o://c191b19113ee6a891f24169d0e41850e85033c025e145757131b78121c85182a" gracePeriod=30 Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.171275 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.171543 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0eecf1aa-66b5-4d92-b0f0-08a04c9ed593" containerName="nova-metadata-log" containerID="cri-o://55f98f46be87a3272f2e04d9e40a1f9281d313951e873ace80ce8383fdb46a6b" gracePeriod=30 Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.171689 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0eecf1aa-66b5-4d92-b0f0-08a04c9ed593" containerName="nova-metadata-metadata" containerID="cri-o://4045c7948a0c343083b32bb762b2eda7ed60eb1593ed25b55276e6ba78331889" gracePeriod=30 Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.748422 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.873330 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-internal-tls-certs\") pod \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.873996 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-config-data\") pod \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.874192 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-public-tls-certs\") pod \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.874306 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-combined-ca-bundle\") pod \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.874460 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5whzs\" (UniqueName: \"kubernetes.io/projected/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-kube-api-access-5whzs\") pod \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.874669 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-logs\") pod \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\" (UID: \"48a810d5-7ca5-496d-b8ec-63be9b26eb8a\") " Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.875645 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-logs" (OuterVolumeSpecName: "logs") pod "48a810d5-7ca5-496d-b8ec-63be9b26eb8a" (UID: "48a810d5-7ca5-496d-b8ec-63be9b26eb8a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.882538 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-kube-api-access-5whzs" (OuterVolumeSpecName: "kube-api-access-5whzs") pod "48a810d5-7ca5-496d-b8ec-63be9b26eb8a" (UID: "48a810d5-7ca5-496d-b8ec-63be9b26eb8a"). InnerVolumeSpecName "kube-api-access-5whzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.909527 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48a810d5-7ca5-496d-b8ec-63be9b26eb8a" (UID: "48a810d5-7ca5-496d-b8ec-63be9b26eb8a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.915221 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-config-data" (OuterVolumeSpecName: "config-data") pod "48a810d5-7ca5-496d-b8ec-63be9b26eb8a" (UID: "48a810d5-7ca5-496d-b8ec-63be9b26eb8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.929328 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "48a810d5-7ca5-496d-b8ec-63be9b26eb8a" (UID: "48a810d5-7ca5-496d-b8ec-63be9b26eb8a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.958820 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "48a810d5-7ca5-496d-b8ec-63be9b26eb8a" (UID: "48a810d5-7ca5-496d-b8ec-63be9b26eb8a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.977046 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-logs\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.977209 4768 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.977299 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.977398 4768 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.977498 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:13 crc kubenswrapper[4768]: I1124 17:11:13.977584 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5whzs\" (UniqueName: \"kubernetes.io/projected/48a810d5-7ca5-496d-b8ec-63be9b26eb8a-kube-api-access-5whzs\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.033017 4768 generic.go:334] "Generic (PLEG): container finished" podID="48a810d5-7ca5-496d-b8ec-63be9b26eb8a" containerID="6aaead22b051ffdb8b02f8911203de593fd7ffdd76623925bfea52508314c202" exitCode=0 Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.034067 4768 generic.go:334] "Generic (PLEG): container finished" podID="48a810d5-7ca5-496d-b8ec-63be9b26eb8a" containerID="0440a0e7e82b2db26ae37e7abb1ace0410c558afbf18fbe61fdaf919dccce656" exitCode=143 Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.033096 
4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48a810d5-7ca5-496d-b8ec-63be9b26eb8a","Type":"ContainerDied","Data":"6aaead22b051ffdb8b02f8911203de593fd7ffdd76623925bfea52508314c202"} Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.034495 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48a810d5-7ca5-496d-b8ec-63be9b26eb8a","Type":"ContainerDied","Data":"0440a0e7e82b2db26ae37e7abb1ace0410c558afbf18fbe61fdaf919dccce656"} Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.034615 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48a810d5-7ca5-496d-b8ec-63be9b26eb8a","Type":"ContainerDied","Data":"a8027a7a0b83800c668d160dc3bd575445f3b7108b698482d2bbcf80150b7f56"} Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.034537 4768 scope.go:117] "RemoveContainer" containerID="6aaead22b051ffdb8b02f8911203de593fd7ffdd76623925bfea52508314c202" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.033079 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.038024 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"509cc4fd-7197-418e-9536-6024e2a95f58","Type":"ContainerStarted","Data":"2139cf3a00506d5109f235c3ab98b200b2f5429ac4275bc37413d47ae153a54e"} Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.038118 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.041390 4768 generic.go:334] "Generic (PLEG): container finished" podID="0eecf1aa-66b5-4d92-b0f0-08a04c9ed593" containerID="55f98f46be87a3272f2e04d9e40a1f9281d313951e873ace80ce8383fdb46a6b" exitCode=143 Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.041447 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593","Type":"ContainerDied","Data":"55f98f46be87a3272f2e04d9e40a1f9281d313951e873ace80ce8383fdb46a6b"} Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.064564 4768 scope.go:117] "RemoveContainer" containerID="0440a0e7e82b2db26ae37e7abb1ace0410c558afbf18fbe61fdaf919dccce656" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.065229 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.090552603 podStartE2EDuration="6.065216081s" podCreationTimestamp="2025-11-24 17:11:08 +0000 UTC" firstStartedPulling="2025-11-24 17:11:09.848264724 +0000 UTC m=+1151.095233382" lastFinishedPulling="2025-11-24 17:11:13.822928202 +0000 UTC m=+1155.069896860" observedRunningTime="2025-11-24 17:11:14.053608293 +0000 UTC m=+1155.300576961" watchObservedRunningTime="2025-11-24 17:11:14.065216081 +0000 UTC m=+1155.312184739" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.094434 4768 scope.go:117] "RemoveContainer" containerID="6aaead22b051ffdb8b02f8911203de593fd7ffdd76623925bfea52508314c202" Nov 24 17:11:14 crc kubenswrapper[4768]: E1124 17:11:14.095391 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aaead22b051ffdb8b02f8911203de593fd7ffdd76623925bfea52508314c202\": container with ID starting with 6aaead22b051ffdb8b02f8911203de593fd7ffdd76623925bfea52508314c202 not found: ID does not exist" 
containerID="6aaead22b051ffdb8b02f8911203de593fd7ffdd76623925bfea52508314c202" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.095435 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aaead22b051ffdb8b02f8911203de593fd7ffdd76623925bfea52508314c202"} err="failed to get container status \"6aaead22b051ffdb8b02f8911203de593fd7ffdd76623925bfea52508314c202\": rpc error: code = NotFound desc = could not find container \"6aaead22b051ffdb8b02f8911203de593fd7ffdd76623925bfea52508314c202\": container with ID starting with 6aaead22b051ffdb8b02f8911203de593fd7ffdd76623925bfea52508314c202 not found: ID does not exist" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.095461 4768 scope.go:117] "RemoveContainer" containerID="0440a0e7e82b2db26ae37e7abb1ace0410c558afbf18fbe61fdaf919dccce656" Nov 24 17:11:14 crc kubenswrapper[4768]: E1124 17:11:14.096280 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0440a0e7e82b2db26ae37e7abb1ace0410c558afbf18fbe61fdaf919dccce656\": container with ID starting with 0440a0e7e82b2db26ae37e7abb1ace0410c558afbf18fbe61fdaf919dccce656 not found: ID does not exist" containerID="0440a0e7e82b2db26ae37e7abb1ace0410c558afbf18fbe61fdaf919dccce656" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.096306 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0440a0e7e82b2db26ae37e7abb1ace0410c558afbf18fbe61fdaf919dccce656"} err="failed to get container status \"0440a0e7e82b2db26ae37e7abb1ace0410c558afbf18fbe61fdaf919dccce656\": rpc error: code = NotFound desc = could not find container \"0440a0e7e82b2db26ae37e7abb1ace0410c558afbf18fbe61fdaf919dccce656\": container with ID starting with 0440a0e7e82b2db26ae37e7abb1ace0410c558afbf18fbe61fdaf919dccce656 not found: ID does not exist" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.096325 4768 scope.go:117] "RemoveContainer" containerID="6aaead22b051ffdb8b02f8911203de593fd7ffdd76623925bfea52508314c202" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.097078 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aaead22b051ffdb8b02f8911203de593fd7ffdd76623925bfea52508314c202"} err="failed to get container status \"6aaead22b051ffdb8b02f8911203de593fd7ffdd76623925bfea52508314c202\": rpc error: code = NotFound desc = could not find container \"6aaead22b051ffdb8b02f8911203de593fd7ffdd76623925bfea52508314c202\": container with ID starting with 6aaead22b051ffdb8b02f8911203de593fd7ffdd76623925bfea52508314c202 not found: ID does not exist" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.097107 4768 scope.go:117] "RemoveContainer" containerID="0440a0e7e82b2db26ae37e7abb1ace0410c558afbf18fbe61fdaf919dccce656" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.097562 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0440a0e7e82b2db26ae37e7abb1ace0410c558afbf18fbe61fdaf919dccce656"} err="failed to get container status \"0440a0e7e82b2db26ae37e7abb1ace0410c558afbf18fbe61fdaf919dccce656\": rpc error: code = NotFound desc = could not find container \"0440a0e7e82b2db26ae37e7abb1ace0410c558afbf18fbe61fdaf919dccce656\": container with ID starting with 0440a0e7e82b2db26ae37e7abb1ace0410c558afbf18fbe61fdaf919dccce656 not found: ID does not exist" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.104464 4768 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.132418 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.141398 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 17:11:14 crc kubenswrapper[4768]: E1124 17:11:14.141834 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="594abc42-5146-4e9e-b9ed-a2c4e74de54b" containerName="dnsmasq-dns" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.141854 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="594abc42-5146-4e9e-b9ed-a2c4e74de54b" containerName="dnsmasq-dns" Nov 24 17:11:14 crc kubenswrapper[4768]: E1124 17:11:14.141868 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="594abc42-5146-4e9e-b9ed-a2c4e74de54b" containerName="init" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.141875 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="594abc42-5146-4e9e-b9ed-a2c4e74de54b" containerName="init" Nov 24 17:11:14 crc kubenswrapper[4768]: E1124 17:11:14.141902 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a810d5-7ca5-496d-b8ec-63be9b26eb8a" containerName="nova-api-api" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.141908 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a810d5-7ca5-496d-b8ec-63be9b26eb8a" containerName="nova-api-api" Nov 24 17:11:14 crc kubenswrapper[4768]: E1124 17:11:14.141928 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9" containerName="nova-manage" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.141933 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9" containerName="nova-manage" Nov 24 17:11:14 crc kubenswrapper[4768]: E1124 17:11:14.141942 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a810d5-7ca5-496d-b8ec-63be9b26eb8a" containerName="nova-api-log" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.141948 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a810d5-7ca5-496d-b8ec-63be9b26eb8a" containerName="nova-api-log" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.142123 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a810d5-7ca5-496d-b8ec-63be9b26eb8a" containerName="nova-api-log" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.142141 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="594abc42-5146-4e9e-b9ed-a2c4e74de54b" containerName="dnsmasq-dns" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.142159 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a810d5-7ca5-496d-b8ec-63be9b26eb8a" containerName="nova-api-api" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.142172 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9" containerName="nova-manage" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.143185 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.145861 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.145990 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.147803 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.148842 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.286550 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f9be604-c179-43ac-b565-428652071d6e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2f9be604-c179-43ac-b565-428652071d6e\") " pod="openstack/nova-api-0" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.286607 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f9be604-c179-43ac-b565-428652071d6e-config-data\") pod \"nova-api-0\" (UID: \"2f9be604-c179-43ac-b565-428652071d6e\") " pod="openstack/nova-api-0" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.286672 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9be604-c179-43ac-b565-428652071d6e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2f9be604-c179-43ac-b565-428652071d6e\") " pod="openstack/nova-api-0" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.286695 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f9be604-c179-43ac-b565-428652071d6e-public-tls-certs\") pod \"nova-api-0\" (UID: \"2f9be604-c179-43ac-b565-428652071d6e\") " pod="openstack/nova-api-0" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.286739 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f9be604-c179-43ac-b565-428652071d6e-logs\") pod \"nova-api-0\" (UID: \"2f9be604-c179-43ac-b565-428652071d6e\") " pod="openstack/nova-api-0" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.286822 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzhq2\" (UniqueName: \"kubernetes.io/projected/2f9be604-c179-43ac-b565-428652071d6e-kube-api-access-hzhq2\") pod \"nova-api-0\" (UID: \"2f9be604-c179-43ac-b565-428652071d6e\") " pod="openstack/nova-api-0" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.388735 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f9be604-c179-43ac-b565-428652071d6e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2f9be604-c179-43ac-b565-428652071d6e\") " pod="openstack/nova-api-0" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.388849 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f9be604-c179-43ac-b565-428652071d6e-config-data\") pod \"nova-api-0\" (UID: 
\"2f9be604-c179-43ac-b565-428652071d6e\") " pod="openstack/nova-api-0" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.388952 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9be604-c179-43ac-b565-428652071d6e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2f9be604-c179-43ac-b565-428652071d6e\") " pod="openstack/nova-api-0" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.388989 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f9be604-c179-43ac-b565-428652071d6e-public-tls-certs\") pod \"nova-api-0\" (UID: \"2f9be604-c179-43ac-b565-428652071d6e\") " pod="openstack/nova-api-0" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.389056 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f9be604-c179-43ac-b565-428652071d6e-logs\") pod \"nova-api-0\" (UID: \"2f9be604-c179-43ac-b565-428652071d6e\") " pod="openstack/nova-api-0" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.389171 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzhq2\" (UniqueName: \"kubernetes.io/projected/2f9be604-c179-43ac-b565-428652071d6e-kube-api-access-hzhq2\") pod \"nova-api-0\" (UID: \"2f9be604-c179-43ac-b565-428652071d6e\") " pod="openstack/nova-api-0" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.389882 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f9be604-c179-43ac-b565-428652071d6e-logs\") pod \"nova-api-0\" (UID: \"2f9be604-c179-43ac-b565-428652071d6e\") " pod="openstack/nova-api-0" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.393655 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f9be604-c179-43ac-b565-428652071d6e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2f9be604-c179-43ac-b565-428652071d6e\") " pod="openstack/nova-api-0" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.393973 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9be604-c179-43ac-b565-428652071d6e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2f9be604-c179-43ac-b565-428652071d6e\") " pod="openstack/nova-api-0" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.395387 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f9be604-c179-43ac-b565-428652071d6e-public-tls-certs\") pod \"nova-api-0\" (UID: \"2f9be604-c179-43ac-b565-428652071d6e\") " pod="openstack/nova-api-0" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.396804 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f9be604-c179-43ac-b565-428652071d6e-config-data\") pod \"nova-api-0\" (UID: \"2f9be604-c179-43ac-b565-428652071d6e\") " pod="openstack/nova-api-0" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.406890 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzhq2\" (UniqueName: \"kubernetes.io/projected/2f9be604-c179-43ac-b565-428652071d6e-kube-api-access-hzhq2\") pod \"nova-api-0\" (UID: \"2f9be604-c179-43ac-b565-428652071d6e\") " pod="openstack/nova-api-0" Nov 
24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.469448 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.804851 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.899088 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px9fh\" (UniqueName: \"kubernetes.io/projected/3a540e3c-2ee6-4c19-955a-40e614b40dce-kube-api-access-px9fh\") pod \"3a540e3c-2ee6-4c19-955a-40e614b40dce\" (UID: \"3a540e3c-2ee6-4c19-955a-40e614b40dce\") " Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.899376 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a540e3c-2ee6-4c19-955a-40e614b40dce-combined-ca-bundle\") pod \"3a540e3c-2ee6-4c19-955a-40e614b40dce\" (UID: \"3a540e3c-2ee6-4c19-955a-40e614b40dce\") " Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.899436 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a540e3c-2ee6-4c19-955a-40e614b40dce-config-data\") pod \"3a540e3c-2ee6-4c19-955a-40e614b40dce\" (UID: \"3a540e3c-2ee6-4c19-955a-40e614b40dce\") " Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.905077 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a540e3c-2ee6-4c19-955a-40e614b40dce-kube-api-access-px9fh" (OuterVolumeSpecName: "kube-api-access-px9fh") pod "3a540e3c-2ee6-4c19-955a-40e614b40dce" (UID: "3a540e3c-2ee6-4c19-955a-40e614b40dce"). InnerVolumeSpecName "kube-api-access-px9fh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.927623 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a540e3c-2ee6-4c19-955a-40e614b40dce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a540e3c-2ee6-4c19-955a-40e614b40dce" (UID: "3a540e3c-2ee6-4c19-955a-40e614b40dce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:11:14 crc kubenswrapper[4768]: I1124 17:11:14.935013 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a540e3c-2ee6-4c19-955a-40e614b40dce-config-data" (OuterVolumeSpecName: "config-data") pod "3a540e3c-2ee6-4c19-955a-40e614b40dce" (UID: "3a540e3c-2ee6-4c19-955a-40e614b40dce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.001340 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a540e3c-2ee6-4c19-955a-40e614b40dce-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.001392 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px9fh\" (UniqueName: \"kubernetes.io/projected/3a540e3c-2ee6-4c19-955a-40e614b40dce-kube-api-access-px9fh\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.001410 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a540e3c-2ee6-4c19-955a-40e614b40dce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.008739 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 17:11:15 crc kubenswrapper[4768]: W1124 17:11:15.010613 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f9be604_c179_43ac_b565_428652071d6e.slice/crio-1c024691f7703de3abbfc903cc4ff9f011939b4df67d16f2ee74095cfac5d619 WatchSource:0}: Error finding container 1c024691f7703de3abbfc903cc4ff9f011939b4df67d16f2ee74095cfac5d619: Status 404 returned error can't find the container with id 1c024691f7703de3abbfc903cc4ff9f011939b4df67d16f2ee74095cfac5d619 Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.053735 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f9be604-c179-43ac-b565-428652071d6e","Type":"ContainerStarted","Data":"1c024691f7703de3abbfc903cc4ff9f011939b4df67d16f2ee74095cfac5d619"} Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.058800 4768 generic.go:334] "Generic (PLEG): container finished" podID="3a540e3c-2ee6-4c19-955a-40e614b40dce" containerID="c191b19113ee6a891f24169d0e41850e85033c025e145757131b78121c85182a" exitCode=0 Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.058844 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3a540e3c-2ee6-4c19-955a-40e614b40dce","Type":"ContainerDied","Data":"c191b19113ee6a891f24169d0e41850e85033c025e145757131b78121c85182a"} Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.058863 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3a540e3c-2ee6-4c19-955a-40e614b40dce","Type":"ContainerDied","Data":"1900d90a7359c9b3bb354a7128132df3f90dffa3012b3d6cc4ef2f3010d706ac"} Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.058879 4768 scope.go:117] "RemoveContainer" containerID="c191b19113ee6a891f24169d0e41850e85033c025e145757131b78121c85182a" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.058950 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.106109 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.122566 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.127115 4768 scope.go:117] "RemoveContainer" containerID="c191b19113ee6a891f24169d0e41850e85033c025e145757131b78121c85182a" Nov 24 17:11:15 crc kubenswrapper[4768]: E1124 17:11:15.127595 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c191b19113ee6a891f24169d0e41850e85033c025e145757131b78121c85182a\": container with ID starting with c191b19113ee6a891f24169d0e41850e85033c025e145757131b78121c85182a not found: ID does not exist" containerID="c191b19113ee6a891f24169d0e41850e85033c025e145757131b78121c85182a" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.127623 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c191b19113ee6a891f24169d0e41850e85033c025e145757131b78121c85182a"} err="failed to get container status \"c191b19113ee6a891f24169d0e41850e85033c025e145757131b78121c85182a\": rpc error: code = NotFound desc = could not find container \"c191b19113ee6a891f24169d0e41850e85033c025e145757131b78121c85182a\": container with ID starting with c191b19113ee6a891f24169d0e41850e85033c025e145757131b78121c85182a not found: ID does not exist" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.132469 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 17:11:15 crc kubenswrapper[4768]: E1124 17:11:15.132817 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a540e3c-2ee6-4c19-955a-40e614b40dce" containerName="nova-scheduler-scheduler" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.132832 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a540e3c-2ee6-4c19-955a-40e614b40dce" containerName="nova-scheduler-scheduler" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.133014 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a540e3c-2ee6-4c19-955a-40e614b40dce" containerName="nova-scheduler-scheduler" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.133656 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.140644 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.141717 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.305935 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/589aaf7d-1ce5-4a36-9501-b91900237cb4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"589aaf7d-1ce5-4a36-9501-b91900237cb4\") " pod="openstack/nova-scheduler-0" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.306324 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8pmh\" (UniqueName: \"kubernetes.io/projected/589aaf7d-1ce5-4a36-9501-b91900237cb4-kube-api-access-r8pmh\") pod \"nova-scheduler-0\" (UID: \"589aaf7d-1ce5-4a36-9501-b91900237cb4\") " pod="openstack/nova-scheduler-0" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.306553 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/589aaf7d-1ce5-4a36-9501-b91900237cb4-config-data\") pod \"nova-scheduler-0\" (UID: \"589aaf7d-1ce5-4a36-9501-b91900237cb4\") " pod="openstack/nova-scheduler-0" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.407936 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/589aaf7d-1ce5-4a36-9501-b91900237cb4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"589aaf7d-1ce5-4a36-9501-b91900237cb4\") " pod="openstack/nova-scheduler-0" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.408064 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8pmh\" (UniqueName: \"kubernetes.io/projected/589aaf7d-1ce5-4a36-9501-b91900237cb4-kube-api-access-r8pmh\") pod \"nova-scheduler-0\" (UID: \"589aaf7d-1ce5-4a36-9501-b91900237cb4\") " pod="openstack/nova-scheduler-0" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.408173 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/589aaf7d-1ce5-4a36-9501-b91900237cb4-config-data\") pod \"nova-scheduler-0\" (UID: \"589aaf7d-1ce5-4a36-9501-b91900237cb4\") " pod="openstack/nova-scheduler-0" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.420020 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/589aaf7d-1ce5-4a36-9501-b91900237cb4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"589aaf7d-1ce5-4a36-9501-b91900237cb4\") " pod="openstack/nova-scheduler-0" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.420507 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/589aaf7d-1ce5-4a36-9501-b91900237cb4-config-data\") pod \"nova-scheduler-0\" (UID: \"589aaf7d-1ce5-4a36-9501-b91900237cb4\") " pod="openstack/nova-scheduler-0" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.428672 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8pmh\" (UniqueName: 
\"kubernetes.io/projected/589aaf7d-1ce5-4a36-9501-b91900237cb4-kube-api-access-r8pmh\") pod \"nova-scheduler-0\" (UID: \"589aaf7d-1ce5-4a36-9501-b91900237cb4\") " pod="openstack/nova-scheduler-0" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.463343 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.601554 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a540e3c-2ee6-4c19-955a-40e614b40dce" path="/var/lib/kubelet/pods/3a540e3c-2ee6-4c19-955a-40e614b40dce/volumes" Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.602777 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48a810d5-7ca5-496d-b8ec-63be9b26eb8a" path="/var/lib/kubelet/pods/48a810d5-7ca5-496d-b8ec-63be9b26eb8a/volumes" Nov 24 17:11:15 crc kubenswrapper[4768]: W1124 17:11:15.968616 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod589aaf7d_1ce5_4a36_9501_b91900237cb4.slice/crio-d23a62f3c60a8190cafccfcc012a4a3c659c39d7637ff9d3105879da45da7662 WatchSource:0}: Error finding container d23a62f3c60a8190cafccfcc012a4a3c659c39d7637ff9d3105879da45da7662: Status 404 returned error can't find the container with id d23a62f3c60a8190cafccfcc012a4a3c659c39d7637ff9d3105879da45da7662 Nov 24 17:11:15 crc kubenswrapper[4768]: I1124 17:11:15.969873 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 17:11:16 crc kubenswrapper[4768]: I1124 17:11:16.078111 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f9be604-c179-43ac-b565-428652071d6e","Type":"ContainerStarted","Data":"953816a44037a1698c9b97585a94220a9cdfaf87fb22652f28554b524f75772d"} Nov 24 17:11:16 crc kubenswrapper[4768]: I1124 17:11:16.078437 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f9be604-c179-43ac-b565-428652071d6e","Type":"ContainerStarted","Data":"e01c18eae177d7c584b4ad369acff43fb16c53810cfced27e2c9a41ecf29c2b1"} Nov 24 17:11:16 crc kubenswrapper[4768]: I1124 17:11:16.081639 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"589aaf7d-1ce5-4a36-9501-b91900237cb4","Type":"ContainerStarted","Data":"d23a62f3c60a8190cafccfcc012a4a3c659c39d7637ff9d3105879da45da7662"} Nov 24 17:11:16 crc kubenswrapper[4768]: I1124 17:11:16.116709 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.116684829 podStartE2EDuration="2.116684829s" podCreationTimestamp="2025-11-24 17:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:11:16.113827208 +0000 UTC m=+1157.360795866" watchObservedRunningTime="2025-11-24 17:11:16.116684829 +0000 UTC m=+1157.363653497" Nov 24 17:11:16 crc kubenswrapper[4768]: I1124 17:11:16.768401 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 17:11:16 crc kubenswrapper[4768]: I1124 17:11:16.933600 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-nova-metadata-tls-certs\") pod \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\" (UID: \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\") " Nov 24 17:11:16 crc kubenswrapper[4768]: I1124 17:11:16.933800 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx7ch\" (UniqueName: \"kubernetes.io/projected/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-kube-api-access-hx7ch\") pod \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\" (UID: \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\") " Nov 24 17:11:16 crc kubenswrapper[4768]: I1124 17:11:16.933846 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-logs\") pod \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\" (UID: \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\") " Nov 24 17:11:16 crc kubenswrapper[4768]: I1124 17:11:16.933864 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-combined-ca-bundle\") pod \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\" (UID: \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\") " Nov 24 17:11:16 crc kubenswrapper[4768]: I1124 17:11:16.933885 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-config-data\") pod \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\" (UID: \"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593\") " Nov 24 17:11:16 crc kubenswrapper[4768]: I1124 17:11:16.935556 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-logs" (OuterVolumeSpecName: "logs") pod "0eecf1aa-66b5-4d92-b0f0-08a04c9ed593" (UID: "0eecf1aa-66b5-4d92-b0f0-08a04c9ed593"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:11:16 crc kubenswrapper[4768]: I1124 17:11:16.939667 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-kube-api-access-hx7ch" (OuterVolumeSpecName: "kube-api-access-hx7ch") pod "0eecf1aa-66b5-4d92-b0f0-08a04c9ed593" (UID: "0eecf1aa-66b5-4d92-b0f0-08a04c9ed593"). InnerVolumeSpecName "kube-api-access-hx7ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:11:16 crc kubenswrapper[4768]: I1124 17:11:16.964241 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0eecf1aa-66b5-4d92-b0f0-08a04c9ed593" (UID: "0eecf1aa-66b5-4d92-b0f0-08a04c9ed593"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:11:16 crc kubenswrapper[4768]: I1124 17:11:16.971783 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-config-data" (OuterVolumeSpecName: "config-data") pod "0eecf1aa-66b5-4d92-b0f0-08a04c9ed593" (UID: "0eecf1aa-66b5-4d92-b0f0-08a04c9ed593"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:11:16 crc kubenswrapper[4768]: I1124 17:11:16.998945 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0eecf1aa-66b5-4d92-b0f0-08a04c9ed593" (UID: "0eecf1aa-66b5-4d92-b0f0-08a04c9ed593"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.036301 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx7ch\" (UniqueName: \"kubernetes.io/projected/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-kube-api-access-hx7ch\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.036338 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-logs\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.036360 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.036371 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.036379 4768 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.097000 4768 generic.go:334] "Generic (PLEG): container finished" podID="0eecf1aa-66b5-4d92-b0f0-08a04c9ed593" containerID="4045c7948a0c343083b32bb762b2eda7ed60eb1593ed25b55276e6ba78331889" exitCode=0 Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.097072 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.097130 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593","Type":"ContainerDied","Data":"4045c7948a0c343083b32bb762b2eda7ed60eb1593ed25b55276e6ba78331889"} Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.097161 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0eecf1aa-66b5-4d92-b0f0-08a04c9ed593","Type":"ContainerDied","Data":"48616866c680e4a3304e67536d26bd39b64fd4a19b89af18accb8c0b7fcb7638"} Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.097180 4768 scope.go:117] "RemoveContainer" containerID="4045c7948a0c343083b32bb762b2eda7ed60eb1593ed25b55276e6ba78331889" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.102577 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"589aaf7d-1ce5-4a36-9501-b91900237cb4","Type":"ContainerStarted","Data":"823587df96fac586b425876b0a9a99035d82accaad44c1bdadc12575be3ce42c"} Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.145876 4768 scope.go:117] "RemoveContainer" containerID="55f98f46be87a3272f2e04d9e40a1f9281d313951e873ace80ce8383fdb46a6b" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.174555 4768 scope.go:117] "RemoveContainer" containerID="4045c7948a0c343083b32bb762b2eda7ed60eb1593ed25b55276e6ba78331889" Nov 24 17:11:17 crc kubenswrapper[4768]: E1124 17:11:17.175042 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4045c7948a0c343083b32bb762b2eda7ed60eb1593ed25b55276e6ba78331889\": container with ID starting with 4045c7948a0c343083b32bb762b2eda7ed60eb1593ed25b55276e6ba78331889 not found: ID does not exist" containerID="4045c7948a0c343083b32bb762b2eda7ed60eb1593ed25b55276e6ba78331889" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.175077 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4045c7948a0c343083b32bb762b2eda7ed60eb1593ed25b55276e6ba78331889"} err="failed to get container status \"4045c7948a0c343083b32bb762b2eda7ed60eb1593ed25b55276e6ba78331889\": rpc error: code = NotFound desc = could not find container \"4045c7948a0c343083b32bb762b2eda7ed60eb1593ed25b55276e6ba78331889\": container with ID starting with 4045c7948a0c343083b32bb762b2eda7ed60eb1593ed25b55276e6ba78331889 not found: ID does not exist" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.175100 4768 scope.go:117] "RemoveContainer" containerID="55f98f46be87a3272f2e04d9e40a1f9281d313951e873ace80ce8383fdb46a6b" Nov 24 17:11:17 crc kubenswrapper[4768]: E1124 17:11:17.175446 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55f98f46be87a3272f2e04d9e40a1f9281d313951e873ace80ce8383fdb46a6b\": container with ID starting with 55f98f46be87a3272f2e04d9e40a1f9281d313951e873ace80ce8383fdb46a6b not found: ID does not exist" containerID="55f98f46be87a3272f2e04d9e40a1f9281d313951e873ace80ce8383fdb46a6b" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.175488 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55f98f46be87a3272f2e04d9e40a1f9281d313951e873ace80ce8383fdb46a6b"} err="failed to get container status \"55f98f46be87a3272f2e04d9e40a1f9281d313951e873ace80ce8383fdb46a6b\": rpc error: code = 
NotFound desc = could not find container \"55f98f46be87a3272f2e04d9e40a1f9281d313951e873ace80ce8383fdb46a6b\": container with ID starting with 55f98f46be87a3272f2e04d9e40a1f9281d313951e873ace80ce8383fdb46a6b not found: ID does not exist" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.175611 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.17559147 podStartE2EDuration="2.17559147s" podCreationTimestamp="2025-11-24 17:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:11:17.124504105 +0000 UTC m=+1158.371472783" watchObservedRunningTime="2025-11-24 17:11:17.17559147 +0000 UTC m=+1158.422560118" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.178651 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.185752 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.192377 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 17:11:17 crc kubenswrapper[4768]: E1124 17:11:17.192908 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eecf1aa-66b5-4d92-b0f0-08a04c9ed593" containerName="nova-metadata-log" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.192930 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eecf1aa-66b5-4d92-b0f0-08a04c9ed593" containerName="nova-metadata-log" Nov 24 17:11:17 crc kubenswrapper[4768]: E1124 17:11:17.192979 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eecf1aa-66b5-4d92-b0f0-08a04c9ed593" containerName="nova-metadata-metadata" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.192989 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eecf1aa-66b5-4d92-b0f0-08a04c9ed593" containerName="nova-metadata-metadata" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.193186 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eecf1aa-66b5-4d92-b0f0-08a04c9ed593" containerName="nova-metadata-log" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.193205 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eecf1aa-66b5-4d92-b0f0-08a04c9ed593" containerName="nova-metadata-metadata" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.194287 4768 util.go:30] "No sandbox for pod can be found. 
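
The RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" runs above (for 4045c794… and 55f98f46… here, and earlier for the two nova-api containers) look alarming but are the normal idempotent-delete path: each container is removed once, and when the kubelet re-issues the removal, CRI-O has already deleted it, the runtime answers with gRPC NotFound, and pod_container_deletor logs the error and moves on. An illustrative Go pattern for that behaviour — not kubelet source, just the shape of treating "already gone" as success:

    package main

    import (
        "errors"
        "fmt"
    )

    // ErrNotFound stands in for the runtime's gRPC NotFound status.
    var ErrNotFound = errors.New("not found")

    // removeContainer treats "already gone" as success: absence is the
    // desired end state, so a NotFound from the runtime is logged, not fatal.
    func removeContainer(id string, remove func(string) error) error {
        if err := remove(id); err != nil {
            if errors.Is(err, ErrNotFound) {
                fmt.Printf("container %s already removed, nothing to do\n", id)
                return nil
            }
            return err
        }
        return nil
    }

    func main() {
        // A runtime stub whose container is already gone, as in the log.
        gone := func(id string) error {
            return fmt.Errorf("could not find container %q: %w", id, ErrNotFound)
        }
        fmt.Println(removeContainer("4045c7948a0c", gone)) // <nil>
        fmt.Println(removeContainer("4045c7948a0c", gone)) // <nil> again: idempotent
    }

Run twice against the same ID it succeeds both times, which is the property these log lines are exercising.
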
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.196663 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.196821 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.205446 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.344954 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxb5g\" (UniqueName: \"kubernetes.io/projected/972962b2-f34e-4ad2-825e-2be316ce2ec3-kube-api-access-bxb5g\") pod \"nova-metadata-0\" (UID: \"972962b2-f34e-4ad2-825e-2be316ce2ec3\") " pod="openstack/nova-metadata-0" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.345296 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972962b2-f34e-4ad2-825e-2be316ce2ec3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"972962b2-f34e-4ad2-825e-2be316ce2ec3\") " pod="openstack/nova-metadata-0" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.345424 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/972962b2-f34e-4ad2-825e-2be316ce2ec3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"972962b2-f34e-4ad2-825e-2be316ce2ec3\") " pod="openstack/nova-metadata-0" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.345464 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/972962b2-f34e-4ad2-825e-2be316ce2ec3-config-data\") pod \"nova-metadata-0\" (UID: \"972962b2-f34e-4ad2-825e-2be316ce2ec3\") " pod="openstack/nova-metadata-0" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.345536 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/972962b2-f34e-4ad2-825e-2be316ce2ec3-logs\") pod \"nova-metadata-0\" (UID: \"972962b2-f34e-4ad2-825e-2be316ce2ec3\") " pod="openstack/nova-metadata-0" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.446774 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxb5g\" (UniqueName: \"kubernetes.io/projected/972962b2-f34e-4ad2-825e-2be316ce2ec3-kube-api-access-bxb5g\") pod \"nova-metadata-0\" (UID: \"972962b2-f34e-4ad2-825e-2be316ce2ec3\") " pod="openstack/nova-metadata-0" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.446822 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972962b2-f34e-4ad2-825e-2be316ce2ec3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"972962b2-f34e-4ad2-825e-2be316ce2ec3\") " pod="openstack/nova-metadata-0" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.446890 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/972962b2-f34e-4ad2-825e-2be316ce2ec3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"972962b2-f34e-4ad2-825e-2be316ce2ec3\") " pod="openstack/nova-metadata-0" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.446921 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/972962b2-f34e-4ad2-825e-2be316ce2ec3-config-data\") pod \"nova-metadata-0\" (UID: \"972962b2-f34e-4ad2-825e-2be316ce2ec3\") " pod="openstack/nova-metadata-0" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.446979 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/972962b2-f34e-4ad2-825e-2be316ce2ec3-logs\") pod \"nova-metadata-0\" (UID: \"972962b2-f34e-4ad2-825e-2be316ce2ec3\") " pod="openstack/nova-metadata-0" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.447466 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/972962b2-f34e-4ad2-825e-2be316ce2ec3-logs\") pod \"nova-metadata-0\" (UID: \"972962b2-f34e-4ad2-825e-2be316ce2ec3\") " pod="openstack/nova-metadata-0" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.451190 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972962b2-f34e-4ad2-825e-2be316ce2ec3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"972962b2-f34e-4ad2-825e-2be316ce2ec3\") " pod="openstack/nova-metadata-0" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.452515 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/972962b2-f34e-4ad2-825e-2be316ce2ec3-config-data\") pod \"nova-metadata-0\" (UID: \"972962b2-f34e-4ad2-825e-2be316ce2ec3\") " pod="openstack/nova-metadata-0" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.460750 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/972962b2-f34e-4ad2-825e-2be316ce2ec3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"972962b2-f34e-4ad2-825e-2be316ce2ec3\") " pod="openstack/nova-metadata-0" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.476710 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxb5g\" (UniqueName: \"kubernetes.io/projected/972962b2-f34e-4ad2-825e-2be316ce2ec3-kube-api-access-bxb5g\") pod \"nova-metadata-0\" (UID: \"972962b2-f34e-4ad2-825e-2be316ce2ec3\") " pod="openstack/nova-metadata-0" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.512037 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.594927 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eecf1aa-66b5-4d92-b0f0-08a04c9ed593" path="/var/lib/kubelet/pods/0eecf1aa-66b5-4d92-b0f0-08a04c9ed593/volumes" Nov 24 17:11:17 crc kubenswrapper[4768]: I1124 17:11:17.993523 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 17:11:18 crc kubenswrapper[4768]: I1124 17:11:18.108985 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"972962b2-f34e-4ad2-825e-2be316ce2ec3","Type":"ContainerStarted","Data":"fdc6d43b2499632b6ea1be22047769803df9358f4554cf2317195c3ae65d4b23"} Nov 24 17:11:19 crc kubenswrapper[4768]: I1124 17:11:19.123639 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"972962b2-f34e-4ad2-825e-2be316ce2ec3","Type":"ContainerStarted","Data":"123a8ab49dd0894abdb1d3f6b7dfd99b5cb7165b9b190425b5b67d6f57715fd4"} Nov 24 17:11:19 crc kubenswrapper[4768]: I1124 17:11:19.124018 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"972962b2-f34e-4ad2-825e-2be316ce2ec3","Type":"ContainerStarted","Data":"2219a51edc81bb4e8101cfd57cd611f05d2779c9d9d29076424a7501e15a49fe"} Nov 24 17:11:19 crc kubenswrapper[4768]: I1124 17:11:19.168749 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.168717978 podStartE2EDuration="2.168717978s" podCreationTimestamp="2025-11-24 17:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:11:19.148057144 +0000 UTC m=+1160.395025842" watchObservedRunningTime="2025-11-24 17:11:19.168717978 +0000 UTC m=+1160.415686666" Nov 24 17:11:20 crc kubenswrapper[4768]: I1124 17:11:20.464478 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 17:11:21 crc kubenswrapper[4768]: I1124 17:11:21.930228 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Nov 24 17:11:21 crc kubenswrapper[4768]: I1124 17:11:21.933413 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Nov 24 17:11:22 crc kubenswrapper[4768]: I1124 17:11:22.513688 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 17:11:22 crc kubenswrapper[4768]: I1124 17:11:22.513958 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 17:11:24 crc kubenswrapper[4768]: I1124 17:11:24.470257 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 17:11:24 crc kubenswrapper[4768]: I1124 17:11:24.470898 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 17:11:25 crc kubenswrapper[4768]: I1124 17:11:25.465769 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 17:11:25 crc kubenswrapper[4768]: I1124 17:11:25.490558 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2f9be604-c179-43ac-b565-428652071d6e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 17:11:25 crc kubenswrapper[4768]: I1124 17:11:25.490582 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2f9be604-c179-43ac-b565-428652071d6e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 17:11:25 crc kubenswrapper[4768]: I1124 17:11:25.497373 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 17:11:26 crc kubenswrapper[4768]: I1124 17:11:26.241052 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 17:11:27 crc kubenswrapper[4768]: I1124 17:11:27.513318 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 17:11:27 crc kubenswrapper[4768]: I1124 17:11:27.513446 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 17:11:28 crc kubenswrapper[4768]: I1124 17:11:28.531613 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="972962b2-f34e-4ad2-825e-2be316ce2ec3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 17:11:28 crc kubenswrapper[4768]: I1124 17:11:28.531631 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="972962b2-f34e-4ad2-825e-2be316ce2ec3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 17:11:34 crc kubenswrapper[4768]: I1124 17:11:34.478938 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 17:11:34 crc kubenswrapper[4768]: I1124 17:11:34.480105 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 17:11:34 crc kubenswrapper[4768]: I1124 17:11:34.481260 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 17:11:34 crc kubenswrapper[4768]: I1124 17:11:34.488428 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 17:11:35 crc kubenswrapper[4768]: I1124 17:11:35.308925 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 17:11:35 crc kubenswrapper[4768]: I1124 17:11:35.318744 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 17:11:37 crc kubenswrapper[4768]: I1124 17:11:37.523687 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 17:11:37 crc kubenswrapper[4768]: I1124 17:11:37.524294 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 17:11:37 crc kubenswrapper[4768]: I1124 17:11:37.530676 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 17:11:37 crc kubenswrapper[4768]: I1124 17:11:37.532688 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 17:11:39 crc 
kubenswrapper[4768]: I1124 17:11:39.325138 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 17:11:49 crc kubenswrapper[4768]: I1124 17:11:49.034614 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 17:11:49 crc kubenswrapper[4768]: I1124 17:11:49.980399 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 17:11:53 crc kubenswrapper[4768]: I1124 17:11:53.826540 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="e47b81a6-f793-404b-9713-121732eea148" containerName="rabbitmq" containerID="cri-o://af4a4419e1bcd42d0f6fa4a2e672b995629fd1dab566c01c5c3990432f0ce427" gracePeriod=604796 Nov 24 17:11:53 crc kubenswrapper[4768]: I1124 17:11:53.943688 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="4fcab967-8d79-401f-927b-8770680c9c30" containerName="rabbitmq" containerID="cri-o://d40a1bb3801bc2867de67406370a0e49460e48d512113a441db8a6b211f8081b" gracePeriod=604797 Nov 24 17:11:57 crc kubenswrapper[4768]: I1124 17:11:57.616426 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="e47b81a6-f793-404b-9713-121732eea148" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Nov 24 17:11:57 crc kubenswrapper[4768]: I1124 17:11:57.970315 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="4fcab967-8d79-401f-927b-8770680c9c30" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.480334 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.553702 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.588264 4768 generic.go:334] "Generic (PLEG): container finished" podID="e47b81a6-f793-404b-9713-121732eea148" containerID="af4a4419e1bcd42d0f6fa4a2e672b995629fd1dab566c01c5c3990432f0ce427" exitCode=0 Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.588324 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e47b81a6-f793-404b-9713-121732eea148","Type":"ContainerDied","Data":"af4a4419e1bcd42d0f6fa4a2e672b995629fd1dab566c01c5c3990432f0ce427"} Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.588346 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e47b81a6-f793-404b-9713-121732eea148","Type":"ContainerDied","Data":"592b29d157f1ff5827edde83f5779bb6e88274a1a27acf2e0569a1876ee1e688"} Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.588327 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.588376 4768 scope.go:117] "RemoveContainer" containerID="af4a4419e1bcd42d0f6fa4a2e672b995629fd1dab566c01c5c3990432f0ce427" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.596593 4768 generic.go:334] "Generic (PLEG): container finished" podID="4fcab967-8d79-401f-927b-8770680c9c30" containerID="d40a1bb3801bc2867de67406370a0e49460e48d512113a441db8a6b211f8081b" exitCode=0 Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.596652 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4fcab967-8d79-401f-927b-8770680c9c30","Type":"ContainerDied","Data":"d40a1bb3801bc2867de67406370a0e49460e48d512113a441db8a6b211f8081b"} Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.596678 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4fcab967-8d79-401f-927b-8770680c9c30","Type":"ContainerDied","Data":"8e212cf564c79cb2536b0aacd15fa20da8e915a55a75345fdb1e4b16e522642a"} Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.596741 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.617297 4768 scope.go:117] "RemoveContainer" containerID="f36530dde2b99b84e29bb49231ba0ff767276f912fb94ca55d7acc740607a119" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.629022 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4fcab967-8d79-401f-927b-8770680c9c30-erlang-cookie-secret\") pod \"4fcab967-8d79-401f-927b-8770680c9c30\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.629175 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4fcab967-8d79-401f-927b-8770680c9c30-plugins-conf\") pod \"4fcab967-8d79-401f-927b-8770680c9c30\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.629920 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fcab967-8d79-401f-927b-8770680c9c30-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4fcab967-8d79-401f-927b-8770680c9c30" (UID: "4fcab967-8d79-401f-927b-8770680c9c30"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.629213 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxfm7\" (UniqueName: \"kubernetes.io/projected/e47b81a6-f793-404b-9713-121732eea148-kube-api-access-hxfm7\") pod \"e47b81a6-f793-404b-9713-121732eea148\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.630033 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-tls\") pod \"4fcab967-8d79-401f-927b-8770680c9c30\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.630063 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-plugins\") pod \"e47b81a6-f793-404b-9713-121732eea148\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.630492 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e47b81a6-f793-404b-9713-121732eea148" (UID: "e47b81a6-f793-404b-9713-121732eea148"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.630570 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-erlang-cookie\") pod \"e47b81a6-f793-404b-9713-121732eea148\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.630592 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-confd\") pod \"e47b81a6-f793-404b-9713-121732eea148\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.630615 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4fcab967-8d79-401f-927b-8770680c9c30-config-data\") pod \"4fcab967-8d79-401f-927b-8770680c9c30\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.630633 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-erlang-cookie\") pod \"4fcab967-8d79-401f-927b-8770680c9c30\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.631857 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4fcab967-8d79-401f-927b-8770680c9c30-pod-info\") pod \"4fcab967-8d79-401f-927b-8770680c9c30\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.631137 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e47b81a6-f793-404b-9713-121732eea148" (UID: "e47b81a6-f793-404b-9713-121732eea148"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.631769 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4fcab967-8d79-401f-927b-8770680c9c30" (UID: "4fcab967-8d79-401f-927b-8770680c9c30"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.631956 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-plugins\") pod \"4fcab967-8d79-401f-927b-8770680c9c30\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.632288 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-confd\") pod \"4fcab967-8d79-401f-927b-8770680c9c30\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.632326 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-tls\") pod \"e47b81a6-f793-404b-9713-121732eea148\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.632603 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4fcab967-8d79-401f-927b-8770680c9c30" (UID: "4fcab967-8d79-401f-927b-8770680c9c30"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.633081 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e47b81a6-f793-404b-9713-121732eea148-server-conf\") pod \"e47b81a6-f793-404b-9713-121732eea148\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.633101 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"4fcab967-8d79-401f-927b-8770680c9c30\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.633155 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e47b81a6-f793-404b-9713-121732eea148-pod-info\") pod \"e47b81a6-f793-404b-9713-121732eea148\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.635337 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e47b81a6-f793-404b-9713-121732eea148-plugins-conf\") pod \"e47b81a6-f793-404b-9713-121732eea148\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.635427 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"e47b81a6-f793-404b-9713-121732eea148\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.635446 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4fcab967-8d79-401f-927b-8770680c9c30-server-conf\") pod \"4fcab967-8d79-401f-927b-8770680c9c30\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.635462 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzqdh\" (UniqueName: \"kubernetes.io/projected/4fcab967-8d79-401f-927b-8770680c9c30-kube-api-access-fzqdh\") pod \"4fcab967-8d79-401f-927b-8770680c9c30\" (UID: \"4fcab967-8d79-401f-927b-8770680c9c30\") " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.635490 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e47b81a6-f793-404b-9713-121732eea148-config-data\") pod \"e47b81a6-f793-404b-9713-121732eea148\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.635523 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e47b81a6-f793-404b-9713-121732eea148-erlang-cookie-secret\") pod \"e47b81a6-f793-404b-9713-121732eea148\" (UID: \"e47b81a6-f793-404b-9713-121732eea148\") " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.635860 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e47b81a6-f793-404b-9713-121732eea148-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e47b81a6-f793-404b-9713-121732eea148" (UID: "e47b81a6-f793-404b-9713-121732eea148"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.635933 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e47b81a6-f793-404b-9713-121732eea148" (UID: "e47b81a6-f793-404b-9713-121732eea148"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.636058 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fcab967-8d79-401f-927b-8770680c9c30-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4fcab967-8d79-401f-927b-8770680c9c30" (UID: "4fcab967-8d79-401f-927b-8770680c9c30"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.636062 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e47b81a6-f793-404b-9713-121732eea148-kube-api-access-hxfm7" (OuterVolumeSpecName: "kube-api-access-hxfm7") pod "e47b81a6-f793-404b-9713-121732eea148" (UID: "e47b81a6-f793-404b-9713-121732eea148"). InnerVolumeSpecName "kube-api-access-hxfm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.638399 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "e47b81a6-f793-404b-9713-121732eea148" (UID: "e47b81a6-f793-404b-9713-121732eea148"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.640476 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4fcab967-8d79-401f-927b-8770680c9c30-pod-info" (OuterVolumeSpecName: "pod-info") pod "4fcab967-8d79-401f-927b-8770680c9c30" (UID: "4fcab967-8d79-401f-927b-8770680c9c30"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.640496 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e47b81a6-f793-404b-9713-121732eea148-pod-info" (OuterVolumeSpecName: "pod-info") pod "e47b81a6-f793-404b-9713-121732eea148" (UID: "e47b81a6-f793-404b-9713-121732eea148"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.641213 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "4fcab967-8d79-401f-927b-8770680c9c30" (UID: "4fcab967-8d79-401f-927b-8770680c9c30"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.648639 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.648928 4768 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4fcab967-8d79-401f-927b-8770680c9c30-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.648940 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.648948 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.649233 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.649272 4768 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e47b81a6-f793-404b-9713-121732eea148-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.649754 4768 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e47b81a6-f793-404b-9713-121732eea148-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.649770 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.649779 4768 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4fcab967-8d79-401f-927b-8770680c9c30-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.652200 4768 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4fcab967-8d79-401f-927b-8770680c9c30-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.652212 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxfm7\" (UniqueName: \"kubernetes.io/projected/e47b81a6-f793-404b-9713-121732eea148-kube-api-access-hxfm7\") on node \"crc\" DevicePath \"\"" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.652220 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.652229 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 
17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.652499 4768 scope.go:117] "RemoveContainer" containerID="af4a4419e1bcd42d0f6fa4a2e672b995629fd1dab566c01c5c3990432f0ce427" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.652641 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4fcab967-8d79-401f-927b-8770680c9c30" (UID: "4fcab967-8d79-401f-927b-8770680c9c30"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:12:00 crc kubenswrapper[4768]: E1124 17:12:00.653139 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af4a4419e1bcd42d0f6fa4a2e672b995629fd1dab566c01c5c3990432f0ce427\": container with ID starting with af4a4419e1bcd42d0f6fa4a2e672b995629fd1dab566c01c5c3990432f0ce427 not found: ID does not exist" containerID="af4a4419e1bcd42d0f6fa4a2e672b995629fd1dab566c01c5c3990432f0ce427" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.653181 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af4a4419e1bcd42d0f6fa4a2e672b995629fd1dab566c01c5c3990432f0ce427"} err="failed to get container status \"af4a4419e1bcd42d0f6fa4a2e672b995629fd1dab566c01c5c3990432f0ce427\": rpc error: code = NotFound desc = could not find container \"af4a4419e1bcd42d0f6fa4a2e672b995629fd1dab566c01c5c3990432f0ce427\": container with ID starting with af4a4419e1bcd42d0f6fa4a2e672b995629fd1dab566c01c5c3990432f0ce427 not found: ID does not exist" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.653207 4768 scope.go:117] "RemoveContainer" containerID="f36530dde2b99b84e29bb49231ba0ff767276f912fb94ca55d7acc740607a119" Nov 24 17:12:00 crc kubenswrapper[4768]: E1124 17:12:00.653659 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f36530dde2b99b84e29bb49231ba0ff767276f912fb94ca55d7acc740607a119\": container with ID starting with f36530dde2b99b84e29bb49231ba0ff767276f912fb94ca55d7acc740607a119 not found: ID does not exist" containerID="f36530dde2b99b84e29bb49231ba0ff767276f912fb94ca55d7acc740607a119" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.653685 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f36530dde2b99b84e29bb49231ba0ff767276f912fb94ca55d7acc740607a119"} err="failed to get container status \"f36530dde2b99b84e29bb49231ba0ff767276f912fb94ca55d7acc740607a119\": rpc error: code = NotFound desc = could not find container \"f36530dde2b99b84e29bb49231ba0ff767276f912fb94ca55d7acc740607a119\": container with ID starting with f36530dde2b99b84e29bb49231ba0ff767276f912fb94ca55d7acc740607a119 not found: ID does not exist" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.653699 4768 scope.go:117] "RemoveContainer" containerID="d40a1bb3801bc2867de67406370a0e49460e48d512113a441db8a6b211f8081b" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.654718 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e47b81a6-f793-404b-9713-121732eea148-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e47b81a6-f793-404b-9713-121732eea148" (UID: "e47b81a6-f793-404b-9713-121732eea148"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.662810 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fcab967-8d79-401f-927b-8770680c9c30-kube-api-access-fzqdh" (OuterVolumeSpecName: "kube-api-access-fzqdh") pod "4fcab967-8d79-401f-927b-8770680c9c30" (UID: "4fcab967-8d79-401f-927b-8770680c9c30"). InnerVolumeSpecName "kube-api-access-fzqdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.695676 4768 scope.go:117] "RemoveContainer" containerID="8b5ffa930480d1bcb82470ec566fa1e05afb25a0f5960ca653a351054ba0aa2c" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.714498 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e47b81a6-f793-404b-9713-121732eea148-config-data" (OuterVolumeSpecName: "config-data") pod "e47b81a6-f793-404b-9713-121732eea148" (UID: "e47b81a6-f793-404b-9713-121732eea148"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.715836 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fcab967-8d79-401f-927b-8770680c9c30-config-data" (OuterVolumeSpecName: "config-data") pod "4fcab967-8d79-401f-927b-8770680c9c30" (UID: "4fcab967-8d79-401f-927b-8770680c9c30"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.719924 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fcab967-8d79-401f-927b-8770680c9c30-server-conf" (OuterVolumeSpecName: "server-conf") pod "4fcab967-8d79-401f-927b-8770680c9c30" (UID: "4fcab967-8d79-401f-927b-8770680c9c30"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.721272 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.724278 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.744839 4768 scope.go:117] "RemoveContainer" containerID="d40a1bb3801bc2867de67406370a0e49460e48d512113a441db8a6b211f8081b" Nov 24 17:12:00 crc kubenswrapper[4768]: E1124 17:12:00.751518 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d40a1bb3801bc2867de67406370a0e49460e48d512113a441db8a6b211f8081b\": container with ID starting with d40a1bb3801bc2867de67406370a0e49460e48d512113a441db8a6b211f8081b not found: ID does not exist" containerID="d40a1bb3801bc2867de67406370a0e49460e48d512113a441db8a6b211f8081b" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.751582 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d40a1bb3801bc2867de67406370a0e49460e48d512113a441db8a6b211f8081b"} err="failed to get container status \"d40a1bb3801bc2867de67406370a0e49460e48d512113a441db8a6b211f8081b\": rpc error: code = NotFound desc = could not find container \"d40a1bb3801bc2867de67406370a0e49460e48d512113a441db8a6b211f8081b\": container with ID starting with d40a1bb3801bc2867de67406370a0e49460e48d512113a441db8a6b211f8081b not found: ID does not exist" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.751611 4768 scope.go:117] "RemoveContainer" containerID="8b5ffa930480d1bcb82470ec566fa1e05afb25a0f5960ca653a351054ba0aa2c" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.753565 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.753594 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzqdh\" (UniqueName: \"kubernetes.io/projected/4fcab967-8d79-401f-927b-8770680c9c30-kube-api-access-fzqdh\") on node \"crc\" DevicePath \"\"" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.753603 4768 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4fcab967-8d79-401f-927b-8770680c9c30-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.753616 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e47b81a6-f793-404b-9713-121732eea148-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.753625 4768 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e47b81a6-f793-404b-9713-121732eea148-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.753633 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 17:12:00 crc 
kubenswrapper[4768]: I1124 17:12:00.753641 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4fcab967-8d79-401f-927b-8770680c9c30-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.753649 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 24 17:12:00 crc kubenswrapper[4768]: E1124 17:12:00.754224 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b5ffa930480d1bcb82470ec566fa1e05afb25a0f5960ca653a351054ba0aa2c\": container with ID starting with 8b5ffa930480d1bcb82470ec566fa1e05afb25a0f5960ca653a351054ba0aa2c not found: ID does not exist" containerID="8b5ffa930480d1bcb82470ec566fa1e05afb25a0f5960ca653a351054ba0aa2c" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.754250 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b5ffa930480d1bcb82470ec566fa1e05afb25a0f5960ca653a351054ba0aa2c"} err="failed to get container status \"8b5ffa930480d1bcb82470ec566fa1e05afb25a0f5960ca653a351054ba0aa2c\": rpc error: code = NotFound desc = could not find container \"8b5ffa930480d1bcb82470ec566fa1e05afb25a0f5960ca653a351054ba0aa2c\": container with ID starting with 8b5ffa930480d1bcb82470ec566fa1e05afb25a0f5960ca653a351054ba0aa2c not found: ID does not exist" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.758933 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e47b81a6-f793-404b-9713-121732eea148-server-conf" (OuterVolumeSpecName: "server-conf") pod "e47b81a6-f793-404b-9713-121732eea148" (UID: "e47b81a6-f793-404b-9713-121732eea148"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.797969 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e47b81a6-f793-404b-9713-121732eea148" (UID: "e47b81a6-f793-404b-9713-121732eea148"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.810404 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4fcab967-8d79-401f-927b-8770680c9c30" (UID: "4fcab967-8d79-401f-927b-8770680c9c30"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.855017 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e47b81a6-f793-404b-9713-121732eea148-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.855044 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4fcab967-8d79-401f-927b-8770680c9c30-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.855055 4768 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e47b81a6-f793-404b-9713-121732eea148-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.921930 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.931667 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.947551 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.959853 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.971830 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 17:12:00 crc kubenswrapper[4768]: E1124 17:12:00.972323 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47b81a6-f793-404b-9713-121732eea148" containerName="rabbitmq" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.972359 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47b81a6-f793-404b-9713-121732eea148" containerName="rabbitmq" Nov 24 17:12:00 crc kubenswrapper[4768]: E1124 17:12:00.972376 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47b81a6-f793-404b-9713-121732eea148" containerName="setup-container" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.972386 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47b81a6-f793-404b-9713-121732eea148" containerName="setup-container" Nov 24 17:12:00 crc kubenswrapper[4768]: E1124 17:12:00.972403 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fcab967-8d79-401f-927b-8770680c9c30" containerName="rabbitmq" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.972410 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fcab967-8d79-401f-927b-8770680c9c30" containerName="rabbitmq" Nov 24 17:12:00 crc kubenswrapper[4768]: E1124 17:12:00.972423 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fcab967-8d79-401f-927b-8770680c9c30" containerName="setup-container" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.972431 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fcab967-8d79-401f-927b-8770680c9c30" containerName="setup-container" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.972659 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e47b81a6-f793-404b-9713-121732eea148" containerName="rabbitmq" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.972692 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fcab967-8d79-401f-927b-8770680c9c30" 
containerName="rabbitmq" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.973868 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.976265 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.976446 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.976557 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-l4ftf" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.977418 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.978389 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.980485 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.980908 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.986033 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.987912 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.992646 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.992863 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.993459 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.993643 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.993821 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-g9ntq" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.994006 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 17:12:00 crc kubenswrapper[4768]: I1124 17:12:00.994155 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.011926 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.022247 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.060130 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b1be76b0-164b-4bd7-950a-38e512cb4d5a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.060174 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf5db907-56c6-4254-8a98-0a6750fd0a07-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.060195 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf5db907-56c6-4254-8a98-0a6750fd0a07-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.060229 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6xtg\" (UniqueName: \"kubernetes.io/projected/cf5db907-56c6-4254-8a98-0a6750fd0a07-kube-api-access-g6xtg\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.060270 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b1be76b0-164b-4bd7-950a-38e512cb4d5a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.060290 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf5db907-56c6-4254-8a98-0a6750fd0a07-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.060309 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.060419 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf5db907-56c6-4254-8a98-0a6750fd0a07-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.060442 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf5db907-56c6-4254-8a98-0a6750fd0a07-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.060491 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b1be76b0-164b-4bd7-950a-38e512cb4d5a-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.060514 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b1be76b0-164b-4bd7-950a-38e512cb4d5a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.060543 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf5db907-56c6-4254-8a98-0a6750fd0a07-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.060575 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b1be76b0-164b-4bd7-950a-38e512cb4d5a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.060598 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b1be76b0-164b-4bd7-950a-38e512cb4d5a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.060620 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dszfb\" (UniqueName: \"kubernetes.io/projected/b1be76b0-164b-4bd7-950a-38e512cb4d5a-kube-api-access-dszfb\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.060642 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf5db907-56c6-4254-8a98-0a6750fd0a07-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.060678 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf5db907-56c6-4254-8a98-0a6750fd0a07-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.060704 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b1be76b0-164b-4bd7-950a-38e512cb4d5a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.060744 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf5db907-56c6-4254-8a98-0a6750fd0a07-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.060772 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.060794 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b1be76b0-164b-4bd7-950a-38e512cb4d5a-config-data\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.060818 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b1be76b0-164b-4bd7-950a-38e512cb4d5a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.163104 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b1be76b0-164b-4bd7-950a-38e512cb4d5a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.163152 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b1be76b0-164b-4bd7-950a-38e512cb4d5a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.163186 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf5db907-56c6-4254-8a98-0a6750fd0a07-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.163214 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b1be76b0-164b-4bd7-950a-38e512cb4d5a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.163230 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b1be76b0-164b-4bd7-950a-38e512cb4d5a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.163265 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dszfb\" (UniqueName: \"kubernetes.io/projected/b1be76b0-164b-4bd7-950a-38e512cb4d5a-kube-api-access-dszfb\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.163282 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf5db907-56c6-4254-8a98-0a6750fd0a07-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.163313 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf5db907-56c6-4254-8a98-0a6750fd0a07-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.163335 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b1be76b0-164b-4bd7-950a-38e512cb4d5a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.163381 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf5db907-56c6-4254-8a98-0a6750fd0a07-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.163404 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.163423 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b1be76b0-164b-4bd7-950a-38e512cb4d5a-config-data\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.163444 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b1be76b0-164b-4bd7-950a-38e512cb4d5a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.163469 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b1be76b0-164b-4bd7-950a-38e512cb4d5a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.163489 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf5db907-56c6-4254-8a98-0a6750fd0a07-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.163508 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf5db907-56c6-4254-8a98-0a6750fd0a07-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.163528 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6xtg\" (UniqueName: \"kubernetes.io/projected/cf5db907-56c6-4254-8a98-0a6750fd0a07-kube-api-access-g6xtg\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.163558 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b1be76b0-164b-4bd7-950a-38e512cb4d5a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.163578 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf5db907-56c6-4254-8a98-0a6750fd0a07-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.163611 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.163636 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf5db907-56c6-4254-8a98-0a6750fd0a07-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.163657 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf5db907-56c6-4254-8a98-0a6750fd0a07-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.164499 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.169836 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf5db907-56c6-4254-8a98-0a6750fd0a07-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.169867 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf5db907-56c6-4254-8a98-0a6750fd0a07-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.169956 4768 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.170269 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf5db907-56c6-4254-8a98-0a6750fd0a07-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.170518 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b1be76b0-164b-4bd7-950a-38e512cb4d5a-config-data\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.170655 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b1be76b0-164b-4bd7-950a-38e512cb4d5a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.170982 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b1be76b0-164b-4bd7-950a-38e512cb4d5a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.171414 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b1be76b0-164b-4bd7-950a-38e512cb4d5a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.171727 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf5db907-56c6-4254-8a98-0a6750fd0a07-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.172269 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b1be76b0-164b-4bd7-950a-38e512cb4d5a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.172412 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf5db907-56c6-4254-8a98-0a6750fd0a07-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.173824 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf5db907-56c6-4254-8a98-0a6750fd0a07-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.174025 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b1be76b0-164b-4bd7-950a-38e512cb4d5a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.176623 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b1be76b0-164b-4bd7-950a-38e512cb4d5a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.178098 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf5db907-56c6-4254-8a98-0a6750fd0a07-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.178572 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b1be76b0-164b-4bd7-950a-38e512cb4d5a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.178944 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b1be76b0-164b-4bd7-950a-38e512cb4d5a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.180221 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf5db907-56c6-4254-8a98-0a6750fd0a07-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.195921 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.208996 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf5db907-56c6-4254-8a98-0a6750fd0a07-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.212393 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6xtg\" (UniqueName: \"kubernetes.io/projected/cf5db907-56c6-4254-8a98-0a6750fd0a07-kube-api-access-g6xtg\") pod \"rabbitmq-cell1-server-0\" (UID: \"cf5db907-56c6-4254-8a98-0a6750fd0a07\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.232130 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dszfb\" (UniqueName: 
\"kubernetes.io/projected/b1be76b0-164b-4bd7-950a-38e512cb4d5a-kube-api-access-dszfb\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.238821 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"b1be76b0-164b-4bd7-950a-38e512cb4d5a\") " pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.291848 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.304937 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.590876 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fcab967-8d79-401f-927b-8770680c9c30" path="/var/lib/kubelet/pods/4fcab967-8d79-401f-927b-8770680c9c30/volumes" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.592151 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e47b81a6-f793-404b-9713-121732eea148" path="/var/lib/kubelet/pods/e47b81a6-f793-404b-9713-121732eea148/volumes" Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.837918 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 17:12:01 crc kubenswrapper[4768]: I1124 17:12:01.896252 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 17:12:01 crc kubenswrapper[4768]: W1124 17:12:01.899068 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf5db907_56c6_4254_8a98_0a6750fd0a07.slice/crio-003467f26ce07b4f275dc36c84086264c59da06c8e4ab1da1c5e5783e0b1ffc6 WatchSource:0}: Error finding container 003467f26ce07b4f275dc36c84086264c59da06c8e4ab1da1c5e5783e0b1ffc6: Status 404 returned error can't find the container with id 003467f26ce07b4f275dc36c84086264c59da06c8e4ab1da1c5e5783e0b1ffc6 Nov 24 17:12:02 crc kubenswrapper[4768]: I1124 17:12:02.627337 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cf5db907-56c6-4254-8a98-0a6750fd0a07","Type":"ContainerStarted","Data":"003467f26ce07b4f275dc36c84086264c59da06c8e4ab1da1c5e5783e0b1ffc6"} Nov 24 17:12:02 crc kubenswrapper[4768]: I1124 17:12:02.628810 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b1be76b0-164b-4bd7-950a-38e512cb4d5a","Type":"ContainerStarted","Data":"cfcd858b5088c80eff99219f2fc2d9dc3e740c6d22db4d359967e90bb78798a5"} Nov 24 17:12:03 crc kubenswrapper[4768]: I1124 17:12:03.657074 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cf5db907-56c6-4254-8a98-0a6750fd0a07","Type":"ContainerStarted","Data":"c8869ac147e3d7c2e28c26571c71e1be0269876515c9bd9581bbdb4f5f50fec3"} Nov 24 17:12:04 crc kubenswrapper[4768]: I1124 17:12:04.684109 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b1be76b0-164b-4bd7-950a-38e512cb4d5a","Type":"ContainerStarted","Data":"f6fbcd1b67d36b8b1f8f707ba5501d310fb87611e10c85ec943198ffddcdbc43"} Nov 24 17:12:04 crc kubenswrapper[4768]: I1124 17:12:04.892723 4768 patch_prober.go:28] 
interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:12:04 crc kubenswrapper[4768]: I1124 17:12:04.892776 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:12:20 crc kubenswrapper[4768]: E1124 17:12:20.763104 4768 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.58:57046->38.102.83.58:40487: write tcp 38.102.83.58:57046->38.102.83.58:40487: write: connection reset by peer Nov 24 17:12:34 crc kubenswrapper[4768]: I1124 17:12:34.893450 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:12:34 crc kubenswrapper[4768]: I1124 17:12:34.894113 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:12:35 crc kubenswrapper[4768]: I1124 17:12:35.737421 4768 generic.go:334] "Generic (PLEG): container finished" podID="b1be76b0-164b-4bd7-950a-38e512cb4d5a" containerID="f6fbcd1b67d36b8b1f8f707ba5501d310fb87611e10c85ec943198ffddcdbc43" exitCode=0 Nov 24 17:12:35 crc kubenswrapper[4768]: I1124 17:12:35.737489 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b1be76b0-164b-4bd7-950a-38e512cb4d5a","Type":"ContainerDied","Data":"f6fbcd1b67d36b8b1f8f707ba5501d310fb87611e10c85ec943198ffddcdbc43"} Nov 24 17:12:35 crc kubenswrapper[4768]: I1124 17:12:35.742122 4768 generic.go:334] "Generic (PLEG): container finished" podID="cf5db907-56c6-4254-8a98-0a6750fd0a07" containerID="c8869ac147e3d7c2e28c26571c71e1be0269876515c9bd9581bbdb4f5f50fec3" exitCode=0 Nov 24 17:12:35 crc kubenswrapper[4768]: I1124 17:12:35.742190 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cf5db907-56c6-4254-8a98-0a6750fd0a07","Type":"ContainerDied","Data":"c8869ac147e3d7c2e28c26571c71e1be0269876515c9bd9581bbdb4f5f50fec3"} Nov 24 17:12:36 crc kubenswrapper[4768]: I1124 17:12:36.752027 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cf5db907-56c6-4254-8a98-0a6750fd0a07","Type":"ContainerStarted","Data":"0faf7ff0c7eacf8e346f2b0b429aa043e8716a8f0dc653acdb8377d6cd01f89c"} Nov 24 17:12:36 crc kubenswrapper[4768]: I1124 17:12:36.753712 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:12:36 crc kubenswrapper[4768]: I1124 17:12:36.757672 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"b1be76b0-164b-4bd7-950a-38e512cb4d5a","Type":"ContainerStarted","Data":"96a120ed3958526aa845e50ba534ebbd995dad1272f59e9747d41bca015ecd58"} Nov 24 17:12:36 crc kubenswrapper[4768]: I1124 17:12:36.758172 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 17:12:36 crc kubenswrapper[4768]: I1124 17:12:36.781635 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.78161749 podStartE2EDuration="36.78161749s" podCreationTimestamp="2025-11-24 17:12:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:12:36.780975112 +0000 UTC m=+1238.027943790" watchObservedRunningTime="2025-11-24 17:12:36.78161749 +0000 UTC m=+1238.028586148" Nov 24 17:12:36 crc kubenswrapper[4768]: I1124 17:12:36.810654 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.810633598 podStartE2EDuration="36.810633598s" podCreationTimestamp="2025-11-24 17:12:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:12:36.806556613 +0000 UTC m=+1238.053525271" watchObservedRunningTime="2025-11-24 17:12:36.810633598 +0000 UTC m=+1238.057602256" Nov 24 17:12:51 crc kubenswrapper[4768]: I1124 17:12:51.295758 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 17:12:51 crc kubenswrapper[4768]: I1124 17:12:51.309554 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 17:13:04 crc kubenswrapper[4768]: I1124 17:13:04.893485 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:13:04 crc kubenswrapper[4768]: I1124 17:13:04.894438 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:13:04 crc kubenswrapper[4768]: I1124 17:13:04.894517 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf255" Nov 24 17:13:04 crc kubenswrapper[4768]: I1124 17:13:04.895585 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2365a36edb89edb46f3a062496f3dfcc63c2a3b858eedccc75acd6744646ba2d"} pod="openshift-machine-config-operator/machine-config-daemon-jf255" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 17:13:04 crc kubenswrapper[4768]: I1124 17:13:04.895684 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" 
containerID="cri-o://2365a36edb89edb46f3a062496f3dfcc63c2a3b858eedccc75acd6744646ba2d" gracePeriod=600 Nov 24 17:13:05 crc kubenswrapper[4768]: I1124 17:13:05.097871 4768 generic.go:334] "Generic (PLEG): container finished" podID="517d8128-bef5-40a3-a786-5010780c2a58" containerID="2365a36edb89edb46f3a062496f3dfcc63c2a3b858eedccc75acd6744646ba2d" exitCode=0 Nov 24 17:13:05 crc kubenswrapper[4768]: I1124 17:13:05.098085 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" event={"ID":"517d8128-bef5-40a3-a786-5010780c2a58","Type":"ContainerDied","Data":"2365a36edb89edb46f3a062496f3dfcc63c2a3b858eedccc75acd6744646ba2d"} Nov 24 17:13:05 crc kubenswrapper[4768]: I1124 17:13:05.098792 4768 scope.go:117] "RemoveContainer" containerID="a2e9550255187c12513b9f3f9cfbe5c32ed6243e82d0531966cc6a07af83a0c7" Nov 24 17:13:06 crc kubenswrapper[4768]: I1124 17:13:06.111019 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" event={"ID":"517d8128-bef5-40a3-a786-5010780c2a58","Type":"ContainerStarted","Data":"3b9927303786853c5de5f1aba770c6858638a08a624bec5c7cfe0a60dd91f385"} Nov 24 17:13:24 crc kubenswrapper[4768]: I1124 17:13:24.662637 4768 scope.go:117] "RemoveContainer" containerID="a34c8f2c3ae2c660ac2228952301650a017b2e17658e6e697d51618819f3c7e9" Nov 24 17:14:24 crc kubenswrapper[4768]: I1124 17:14:24.752227 4768 scope.go:117] "RemoveContainer" containerID="9020b83ecd6f8712aac189211a99ec31f6291489c38a56d1fe94ef174a6bba28" Nov 24 17:15:00 crc kubenswrapper[4768]: I1124 17:15:00.148203 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400075-xw9br"] Nov 24 17:15:00 crc kubenswrapper[4768]: I1124 17:15:00.150210 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400075-xw9br" Nov 24 17:15:00 crc kubenswrapper[4768]: I1124 17:15:00.154031 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 17:15:00 crc kubenswrapper[4768]: I1124 17:15:00.154221 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 17:15:00 crc kubenswrapper[4768]: I1124 17:15:00.163171 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400075-xw9br"] Nov 24 17:15:00 crc kubenswrapper[4768]: I1124 17:15:00.308935 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5f45cfc-b0bd-4c2a-9408-51feb3eaac07-secret-volume\") pod \"collect-profiles-29400075-xw9br\" (UID: \"c5f45cfc-b0bd-4c2a-9408-51feb3eaac07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400075-xw9br" Nov 24 17:15:00 crc kubenswrapper[4768]: I1124 17:15:00.309019 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2xld\" (UniqueName: \"kubernetes.io/projected/c5f45cfc-b0bd-4c2a-9408-51feb3eaac07-kube-api-access-x2xld\") pod \"collect-profiles-29400075-xw9br\" (UID: \"c5f45cfc-b0bd-4c2a-9408-51feb3eaac07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400075-xw9br" Nov 24 17:15:00 crc kubenswrapper[4768]: I1124 17:15:00.309181 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5f45cfc-b0bd-4c2a-9408-51feb3eaac07-config-volume\") pod \"collect-profiles-29400075-xw9br\" (UID: \"c5f45cfc-b0bd-4c2a-9408-51feb3eaac07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400075-xw9br" Nov 24 17:15:00 crc kubenswrapper[4768]: I1124 17:15:00.413726 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5f45cfc-b0bd-4c2a-9408-51feb3eaac07-secret-volume\") pod \"collect-profiles-29400075-xw9br\" (UID: \"c5f45cfc-b0bd-4c2a-9408-51feb3eaac07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400075-xw9br" Nov 24 17:15:00 crc kubenswrapper[4768]: I1124 17:15:00.413796 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2xld\" (UniqueName: \"kubernetes.io/projected/c5f45cfc-b0bd-4c2a-9408-51feb3eaac07-kube-api-access-x2xld\") pod \"collect-profiles-29400075-xw9br\" (UID: \"c5f45cfc-b0bd-4c2a-9408-51feb3eaac07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400075-xw9br" Nov 24 17:15:00 crc kubenswrapper[4768]: I1124 17:15:00.413875 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5f45cfc-b0bd-4c2a-9408-51feb3eaac07-config-volume\") pod \"collect-profiles-29400075-xw9br\" (UID: \"c5f45cfc-b0bd-4c2a-9408-51feb3eaac07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400075-xw9br" Nov 24 17:15:00 crc kubenswrapper[4768]: I1124 17:15:00.415004 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5f45cfc-b0bd-4c2a-9408-51feb3eaac07-config-volume\") pod 
\"collect-profiles-29400075-xw9br\" (UID: \"c5f45cfc-b0bd-4c2a-9408-51feb3eaac07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400075-xw9br" Nov 24 17:15:00 crc kubenswrapper[4768]: I1124 17:15:00.428641 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5f45cfc-b0bd-4c2a-9408-51feb3eaac07-secret-volume\") pod \"collect-profiles-29400075-xw9br\" (UID: \"c5f45cfc-b0bd-4c2a-9408-51feb3eaac07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400075-xw9br" Nov 24 17:15:00 crc kubenswrapper[4768]: I1124 17:15:00.442507 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2xld\" (UniqueName: \"kubernetes.io/projected/c5f45cfc-b0bd-4c2a-9408-51feb3eaac07-kube-api-access-x2xld\") pod \"collect-profiles-29400075-xw9br\" (UID: \"c5f45cfc-b0bd-4c2a-9408-51feb3eaac07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400075-xw9br" Nov 24 17:15:00 crc kubenswrapper[4768]: I1124 17:15:00.465676 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400075-xw9br" Nov 24 17:15:00 crc kubenswrapper[4768]: I1124 17:15:00.890920 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400075-xw9br"] Nov 24 17:15:01 crc kubenswrapper[4768]: I1124 17:15:01.508220 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400075-xw9br" event={"ID":"c5f45cfc-b0bd-4c2a-9408-51feb3eaac07","Type":"ContainerStarted","Data":"d8c24787b11cdd823b7a87fd07392c43eb3a0090434b54550ae53a8fe94f2497"} Nov 24 17:15:01 crc kubenswrapper[4768]: I1124 17:15:01.508626 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400075-xw9br" event={"ID":"c5f45cfc-b0bd-4c2a-9408-51feb3eaac07","Type":"ContainerStarted","Data":"45bd9e4b64d66063aa7978b8324cbe0227eeb060521ee026911b3d02adc23d32"} Nov 24 17:15:01 crc kubenswrapper[4768]: I1124 17:15:01.532714 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400075-xw9br" podStartSLOduration=1.532694341 podStartE2EDuration="1.532694341s" podCreationTimestamp="2025-11-24 17:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:15:01.524189394 +0000 UTC m=+1382.771158062" watchObservedRunningTime="2025-11-24 17:15:01.532694341 +0000 UTC m=+1382.779662999" Nov 24 17:15:02 crc kubenswrapper[4768]: I1124 17:15:02.518312 4768 generic.go:334] "Generic (PLEG): container finished" podID="c5f45cfc-b0bd-4c2a-9408-51feb3eaac07" containerID="d8c24787b11cdd823b7a87fd07392c43eb3a0090434b54550ae53a8fe94f2497" exitCode=0 Nov 24 17:15:02 crc kubenswrapper[4768]: I1124 17:15:02.518389 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400075-xw9br" event={"ID":"c5f45cfc-b0bd-4c2a-9408-51feb3eaac07","Type":"ContainerDied","Data":"d8c24787b11cdd823b7a87fd07392c43eb3a0090434b54550ae53a8fe94f2497"} Nov 24 17:15:03 crc kubenswrapper[4768]: I1124 17:15:03.903084 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400075-xw9br" Nov 24 17:15:03 crc kubenswrapper[4768]: I1124 17:15:03.996566 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5f45cfc-b0bd-4c2a-9408-51feb3eaac07-secret-volume\") pod \"c5f45cfc-b0bd-4c2a-9408-51feb3eaac07\" (UID: \"c5f45cfc-b0bd-4c2a-9408-51feb3eaac07\") " Nov 24 17:15:03 crc kubenswrapper[4768]: I1124 17:15:03.996636 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5f45cfc-b0bd-4c2a-9408-51feb3eaac07-config-volume\") pod \"c5f45cfc-b0bd-4c2a-9408-51feb3eaac07\" (UID: \"c5f45cfc-b0bd-4c2a-9408-51feb3eaac07\") " Nov 24 17:15:03 crc kubenswrapper[4768]: I1124 17:15:03.996664 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2xld\" (UniqueName: \"kubernetes.io/projected/c5f45cfc-b0bd-4c2a-9408-51feb3eaac07-kube-api-access-x2xld\") pod \"c5f45cfc-b0bd-4c2a-9408-51feb3eaac07\" (UID: \"c5f45cfc-b0bd-4c2a-9408-51feb3eaac07\") " Nov 24 17:15:03 crc kubenswrapper[4768]: I1124 17:15:03.998901 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f45cfc-b0bd-4c2a-9408-51feb3eaac07-config-volume" (OuterVolumeSpecName: "config-volume") pod "c5f45cfc-b0bd-4c2a-9408-51feb3eaac07" (UID: "c5f45cfc-b0bd-4c2a-9408-51feb3eaac07"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:15:04 crc kubenswrapper[4768]: I1124 17:15:04.003713 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f45cfc-b0bd-4c2a-9408-51feb3eaac07-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c5f45cfc-b0bd-4c2a-9408-51feb3eaac07" (UID: "c5f45cfc-b0bd-4c2a-9408-51feb3eaac07"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:15:04 crc kubenswrapper[4768]: I1124 17:15:04.003717 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f45cfc-b0bd-4c2a-9408-51feb3eaac07-kube-api-access-x2xld" (OuterVolumeSpecName: "kube-api-access-x2xld") pod "c5f45cfc-b0bd-4c2a-9408-51feb3eaac07" (UID: "c5f45cfc-b0bd-4c2a-9408-51feb3eaac07"). InnerVolumeSpecName "kube-api-access-x2xld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:15:04 crc kubenswrapper[4768]: I1124 17:15:04.099367 4768 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5f45cfc-b0bd-4c2a-9408-51feb3eaac07-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 17:15:04 crc kubenswrapper[4768]: I1124 17:15:04.099404 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5f45cfc-b0bd-4c2a-9408-51feb3eaac07-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 17:15:04 crc kubenswrapper[4768]: I1124 17:15:04.099419 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2xld\" (UniqueName: \"kubernetes.io/projected/c5f45cfc-b0bd-4c2a-9408-51feb3eaac07-kube-api-access-x2xld\") on node \"crc\" DevicePath \"\"" Nov 24 17:15:04 crc kubenswrapper[4768]: I1124 17:15:04.541903 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400075-xw9br" event={"ID":"c5f45cfc-b0bd-4c2a-9408-51feb3eaac07","Type":"ContainerDied","Data":"45bd9e4b64d66063aa7978b8324cbe0227eeb060521ee026911b3d02adc23d32"} Nov 24 17:15:04 crc kubenswrapper[4768]: I1124 17:15:04.542287 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45bd9e4b64d66063aa7978b8324cbe0227eeb060521ee026911b3d02adc23d32" Nov 24 17:15:04 crc kubenswrapper[4768]: I1124 17:15:04.541972 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400075-xw9br" Nov 24 17:15:24 crc kubenswrapper[4768]: I1124 17:15:24.805160 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vgpls"] Nov 24 17:15:24 crc kubenswrapper[4768]: E1124 17:15:24.806143 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f45cfc-b0bd-4c2a-9408-51feb3eaac07" containerName="collect-profiles" Nov 24 17:15:24 crc kubenswrapper[4768]: I1124 17:15:24.806157 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f45cfc-b0bd-4c2a-9408-51feb3eaac07" containerName="collect-profiles" Nov 24 17:15:24 crc kubenswrapper[4768]: I1124 17:15:24.806395 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f45cfc-b0bd-4c2a-9408-51feb3eaac07" containerName="collect-profiles" Nov 24 17:15:24 crc kubenswrapper[4768]: I1124 17:15:24.807745 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vgpls" Nov 24 17:15:24 crc kubenswrapper[4768]: I1124 17:15:24.815018 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgpls"] Nov 24 17:15:24 crc kubenswrapper[4768]: I1124 17:15:24.902332 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjxvg\" (UniqueName: \"kubernetes.io/projected/c4175a68-3893-426f-b944-489aee0c9af2-kube-api-access-jjxvg\") pod \"redhat-marketplace-vgpls\" (UID: \"c4175a68-3893-426f-b944-489aee0c9af2\") " pod="openshift-marketplace/redhat-marketplace-vgpls" Nov 24 17:15:24 crc kubenswrapper[4768]: I1124 17:15:24.902421 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4175a68-3893-426f-b944-489aee0c9af2-utilities\") pod \"redhat-marketplace-vgpls\" (UID: \"c4175a68-3893-426f-b944-489aee0c9af2\") " pod="openshift-marketplace/redhat-marketplace-vgpls" Nov 24 17:15:24 crc kubenswrapper[4768]: I1124 17:15:24.902629 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4175a68-3893-426f-b944-489aee0c9af2-catalog-content\") pod \"redhat-marketplace-vgpls\" (UID: \"c4175a68-3893-426f-b944-489aee0c9af2\") " pod="openshift-marketplace/redhat-marketplace-vgpls" Nov 24 17:15:25 crc kubenswrapper[4768]: I1124 17:15:25.004743 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjxvg\" (UniqueName: \"kubernetes.io/projected/c4175a68-3893-426f-b944-489aee0c9af2-kube-api-access-jjxvg\") pod \"redhat-marketplace-vgpls\" (UID: \"c4175a68-3893-426f-b944-489aee0c9af2\") " pod="openshift-marketplace/redhat-marketplace-vgpls" Nov 24 17:15:25 crc kubenswrapper[4768]: I1124 17:15:25.004830 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4175a68-3893-426f-b944-489aee0c9af2-utilities\") pod \"redhat-marketplace-vgpls\" (UID: \"c4175a68-3893-426f-b944-489aee0c9af2\") " pod="openshift-marketplace/redhat-marketplace-vgpls" Nov 24 17:15:25 crc kubenswrapper[4768]: I1124 17:15:25.004933 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4175a68-3893-426f-b944-489aee0c9af2-catalog-content\") pod \"redhat-marketplace-vgpls\" (UID: \"c4175a68-3893-426f-b944-489aee0c9af2\") " pod="openshift-marketplace/redhat-marketplace-vgpls" Nov 24 17:15:25 crc kubenswrapper[4768]: I1124 17:15:25.005557 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4175a68-3893-426f-b944-489aee0c9af2-catalog-content\") pod \"redhat-marketplace-vgpls\" (UID: \"c4175a68-3893-426f-b944-489aee0c9af2\") " pod="openshift-marketplace/redhat-marketplace-vgpls" Nov 24 17:15:25 crc kubenswrapper[4768]: I1124 17:15:25.006183 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4175a68-3893-426f-b944-489aee0c9af2-utilities\") pod \"redhat-marketplace-vgpls\" (UID: \"c4175a68-3893-426f-b944-489aee0c9af2\") " pod="openshift-marketplace/redhat-marketplace-vgpls" Nov 24 17:15:25 crc kubenswrapper[4768]: I1124 17:15:25.032114 4768 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jjxvg\" (UniqueName: \"kubernetes.io/projected/c4175a68-3893-426f-b944-489aee0c9af2-kube-api-access-jjxvg\") pod \"redhat-marketplace-vgpls\" (UID: \"c4175a68-3893-426f-b944-489aee0c9af2\") " pod="openshift-marketplace/redhat-marketplace-vgpls" Nov 24 17:15:25 crc kubenswrapper[4768]: I1124 17:15:25.128577 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vgpls" Nov 24 17:15:25 crc kubenswrapper[4768]: I1124 17:15:25.620109 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgpls"] Nov 24 17:15:25 crc kubenswrapper[4768]: I1124 17:15:25.791928 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgpls" event={"ID":"c4175a68-3893-426f-b944-489aee0c9af2","Type":"ContainerStarted","Data":"78e7492b0d932873a370db1bbf3fc748489c4f18af783c033270d345d652692f"} Nov 24 17:15:26 crc kubenswrapper[4768]: I1124 17:15:26.803380 4768 generic.go:334] "Generic (PLEG): container finished" podID="c4175a68-3893-426f-b944-489aee0c9af2" containerID="c27c42533728e4ff7683472c67a933649d8ef9bcad646a27293aeebaecd3bea4" exitCode=0 Nov 24 17:15:26 crc kubenswrapper[4768]: I1124 17:15:26.803487 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgpls" event={"ID":"c4175a68-3893-426f-b944-489aee0c9af2","Type":"ContainerDied","Data":"c27c42533728e4ff7683472c67a933649d8ef9bcad646a27293aeebaecd3bea4"} Nov 24 17:15:26 crc kubenswrapper[4768]: I1124 17:15:26.805495 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 17:15:27 crc kubenswrapper[4768]: I1124 17:15:27.814858 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgpls" event={"ID":"c4175a68-3893-426f-b944-489aee0c9af2","Type":"ContainerStarted","Data":"e14c80d7d68cc52e48f37aca7838f8c1f1f73f77371100019aa49c0392449a16"} Nov 24 17:15:28 crc kubenswrapper[4768]: I1124 17:15:28.824806 4768 generic.go:334] "Generic (PLEG): container finished" podID="c4175a68-3893-426f-b944-489aee0c9af2" containerID="e14c80d7d68cc52e48f37aca7838f8c1f1f73f77371100019aa49c0392449a16" exitCode=0 Nov 24 17:15:28 crc kubenswrapper[4768]: I1124 17:15:28.825146 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgpls" event={"ID":"c4175a68-3893-426f-b944-489aee0c9af2","Type":"ContainerDied","Data":"e14c80d7d68cc52e48f37aca7838f8c1f1f73f77371100019aa49c0392449a16"} Nov 24 17:15:29 crc kubenswrapper[4768]: I1124 17:15:29.836081 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgpls" event={"ID":"c4175a68-3893-426f-b944-489aee0c9af2","Type":"ContainerStarted","Data":"e452a9174a14ce15e2562fce491ab45550cac951280ee005a10193d5a9e0cc06"} Nov 24 17:15:29 crc kubenswrapper[4768]: I1124 17:15:29.865220 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vgpls" podStartSLOduration=3.428416801 podStartE2EDuration="5.865201675s" podCreationTimestamp="2025-11-24 17:15:24 +0000 UTC" firstStartedPulling="2025-11-24 17:15:26.805270484 +0000 UTC m=+1408.052239142" lastFinishedPulling="2025-11-24 17:15:29.242055348 +0000 UTC m=+1410.489024016" observedRunningTime="2025-11-24 17:15:29.857749637 +0000 UTC m=+1411.104718295" watchObservedRunningTime="2025-11-24 17:15:29.865201675 +0000 UTC 
m=+1411.112170333" Nov 24 17:15:34 crc kubenswrapper[4768]: I1124 17:15:34.893062 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:15:34 crc kubenswrapper[4768]: I1124 17:15:34.893690 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:15:35 crc kubenswrapper[4768]: I1124 17:15:35.129142 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vgpls" Nov 24 17:15:35 crc kubenswrapper[4768]: I1124 17:15:35.129234 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vgpls" Nov 24 17:15:35 crc kubenswrapper[4768]: I1124 17:15:35.203011 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vgpls" Nov 24 17:15:35 crc kubenswrapper[4768]: I1124 17:15:35.537760 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mvf8j"] Nov 24 17:15:35 crc kubenswrapper[4768]: I1124 17:15:35.542860 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mvf8j" Nov 24 17:15:35 crc kubenswrapper[4768]: I1124 17:15:35.554839 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mvf8j"] Nov 24 17:15:35 crc kubenswrapper[4768]: I1124 17:15:35.652806 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d74b70-e8c0-4315-822f-3d75f6b4e6ae-utilities\") pod \"certified-operators-mvf8j\" (UID: \"97d74b70-e8c0-4315-822f-3d75f6b4e6ae\") " pod="openshift-marketplace/certified-operators-mvf8j" Nov 24 17:15:35 crc kubenswrapper[4768]: I1124 17:15:35.652841 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtgl2\" (UniqueName: \"kubernetes.io/projected/97d74b70-e8c0-4315-822f-3d75f6b4e6ae-kube-api-access-gtgl2\") pod \"certified-operators-mvf8j\" (UID: \"97d74b70-e8c0-4315-822f-3d75f6b4e6ae\") " pod="openshift-marketplace/certified-operators-mvf8j" Nov 24 17:15:35 crc kubenswrapper[4768]: I1124 17:15:35.653024 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d74b70-e8c0-4315-822f-3d75f6b4e6ae-catalog-content\") pod \"certified-operators-mvf8j\" (UID: \"97d74b70-e8c0-4315-822f-3d75f6b4e6ae\") " pod="openshift-marketplace/certified-operators-mvf8j" Nov 24 17:15:35 crc kubenswrapper[4768]: I1124 17:15:35.754718 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d74b70-e8c0-4315-822f-3d75f6b4e6ae-catalog-content\") pod \"certified-operators-mvf8j\" (UID: \"97d74b70-e8c0-4315-822f-3d75f6b4e6ae\") " pod="openshift-marketplace/certified-operators-mvf8j" Nov 24 17:15:35 crc kubenswrapper[4768]: 
I1124 17:15:35.754857 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d74b70-e8c0-4315-822f-3d75f6b4e6ae-utilities\") pod \"certified-operators-mvf8j\" (UID: \"97d74b70-e8c0-4315-822f-3d75f6b4e6ae\") " pod="openshift-marketplace/certified-operators-mvf8j" Nov 24 17:15:35 crc kubenswrapper[4768]: I1124 17:15:35.754888 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtgl2\" (UniqueName: \"kubernetes.io/projected/97d74b70-e8c0-4315-822f-3d75f6b4e6ae-kube-api-access-gtgl2\") pod \"certified-operators-mvf8j\" (UID: \"97d74b70-e8c0-4315-822f-3d75f6b4e6ae\") " pod="openshift-marketplace/certified-operators-mvf8j" Nov 24 17:15:35 crc kubenswrapper[4768]: I1124 17:15:35.756337 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d74b70-e8c0-4315-822f-3d75f6b4e6ae-catalog-content\") pod \"certified-operators-mvf8j\" (UID: \"97d74b70-e8c0-4315-822f-3d75f6b4e6ae\") " pod="openshift-marketplace/certified-operators-mvf8j" Nov 24 17:15:35 crc kubenswrapper[4768]: I1124 17:15:35.756606 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d74b70-e8c0-4315-822f-3d75f6b4e6ae-utilities\") pod \"certified-operators-mvf8j\" (UID: \"97d74b70-e8c0-4315-822f-3d75f6b4e6ae\") " pod="openshift-marketplace/certified-operators-mvf8j" Nov 24 17:15:35 crc kubenswrapper[4768]: I1124 17:15:35.773736 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtgl2\" (UniqueName: \"kubernetes.io/projected/97d74b70-e8c0-4315-822f-3d75f6b4e6ae-kube-api-access-gtgl2\") pod \"certified-operators-mvf8j\" (UID: \"97d74b70-e8c0-4315-822f-3d75f6b4e6ae\") " pod="openshift-marketplace/certified-operators-mvf8j" Nov 24 17:15:35 crc kubenswrapper[4768]: I1124 17:15:35.890704 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mvf8j" Nov 24 17:15:35 crc kubenswrapper[4768]: I1124 17:15:35.956781 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vgpls" Nov 24 17:15:36 crc kubenswrapper[4768]: I1124 17:15:36.433144 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mvf8j"] Nov 24 17:15:36 crc kubenswrapper[4768]: I1124 17:15:36.897397 4768 generic.go:334] "Generic (PLEG): container finished" podID="97d74b70-e8c0-4315-822f-3d75f6b4e6ae" containerID="e39b7c398d8c823ac4c84abf48dd9f76ca16cc900ea5b0453f9b18ab77fa9fdb" exitCode=0 Nov 24 17:15:36 crc kubenswrapper[4768]: I1124 17:15:36.897568 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvf8j" event={"ID":"97d74b70-e8c0-4315-822f-3d75f6b4e6ae","Type":"ContainerDied","Data":"e39b7c398d8c823ac4c84abf48dd9f76ca16cc900ea5b0453f9b18ab77fa9fdb"} Nov 24 17:15:36 crc kubenswrapper[4768]: I1124 17:15:36.897707 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvf8j" event={"ID":"97d74b70-e8c0-4315-822f-3d75f6b4e6ae","Type":"ContainerStarted","Data":"41a9733e1033903065f0ae3121bf5823a5246f295192e2ad5424dc413694cf0f"} Nov 24 17:15:38 crc kubenswrapper[4768]: I1124 17:15:38.237601 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgpls"] Nov 24 17:15:38 crc kubenswrapper[4768]: I1124 17:15:38.238038 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vgpls" podUID="c4175a68-3893-426f-b944-489aee0c9af2" containerName="registry-server" containerID="cri-o://e452a9174a14ce15e2562fce491ab45550cac951280ee005a10193d5a9e0cc06" gracePeriod=2 Nov 24 17:15:38 crc kubenswrapper[4768]: I1124 17:15:38.799055 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vgpls" Nov 24 17:15:38 crc kubenswrapper[4768]: I1124 17:15:38.913787 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4175a68-3893-426f-b944-489aee0c9af2-catalog-content\") pod \"c4175a68-3893-426f-b944-489aee0c9af2\" (UID: \"c4175a68-3893-426f-b944-489aee0c9af2\") " Nov 24 17:15:38 crc kubenswrapper[4768]: I1124 17:15:38.913845 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjxvg\" (UniqueName: \"kubernetes.io/projected/c4175a68-3893-426f-b944-489aee0c9af2-kube-api-access-jjxvg\") pod \"c4175a68-3893-426f-b944-489aee0c9af2\" (UID: \"c4175a68-3893-426f-b944-489aee0c9af2\") " Nov 24 17:15:38 crc kubenswrapper[4768]: I1124 17:15:38.913899 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4175a68-3893-426f-b944-489aee0c9af2-utilities\") pod \"c4175a68-3893-426f-b944-489aee0c9af2\" (UID: \"c4175a68-3893-426f-b944-489aee0c9af2\") " Nov 24 17:15:38 crc kubenswrapper[4768]: I1124 17:15:38.914706 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4175a68-3893-426f-b944-489aee0c9af2-utilities" (OuterVolumeSpecName: "utilities") pod "c4175a68-3893-426f-b944-489aee0c9af2" (UID: "c4175a68-3893-426f-b944-489aee0c9af2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:15:38 crc kubenswrapper[4768]: I1124 17:15:38.915761 4768 generic.go:334] "Generic (PLEG): container finished" podID="97d74b70-e8c0-4315-822f-3d75f6b4e6ae" containerID="e4626aa291ff32c974b3cd58164fe04004b131aa9690ea847f25fb865cfe88d8" exitCode=0 Nov 24 17:15:38 crc kubenswrapper[4768]: I1124 17:15:38.915835 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvf8j" event={"ID":"97d74b70-e8c0-4315-822f-3d75f6b4e6ae","Type":"ContainerDied","Data":"e4626aa291ff32c974b3cd58164fe04004b131aa9690ea847f25fb865cfe88d8"} Nov 24 17:15:38 crc kubenswrapper[4768]: I1124 17:15:38.926386 4768 generic.go:334] "Generic (PLEG): container finished" podID="c4175a68-3893-426f-b944-489aee0c9af2" containerID="e452a9174a14ce15e2562fce491ab45550cac951280ee005a10193d5a9e0cc06" exitCode=0 Nov 24 17:15:38 crc kubenswrapper[4768]: I1124 17:15:38.926439 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgpls" event={"ID":"c4175a68-3893-426f-b944-489aee0c9af2","Type":"ContainerDied","Data":"e452a9174a14ce15e2562fce491ab45550cac951280ee005a10193d5a9e0cc06"} Nov 24 17:15:38 crc kubenswrapper[4768]: I1124 17:15:38.926482 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgpls" event={"ID":"c4175a68-3893-426f-b944-489aee0c9af2","Type":"ContainerDied","Data":"78e7492b0d932873a370db1bbf3fc748489c4f18af783c033270d345d652692f"} Nov 24 17:15:38 crc kubenswrapper[4768]: I1124 17:15:38.926486 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vgpls" Nov 24 17:15:38 crc kubenswrapper[4768]: I1124 17:15:38.926500 4768 scope.go:117] "RemoveContainer" containerID="e452a9174a14ce15e2562fce491ab45550cac951280ee005a10193d5a9e0cc06" Nov 24 17:15:38 crc kubenswrapper[4768]: I1124 17:15:38.930294 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4175a68-3893-426f-b944-489aee0c9af2-kube-api-access-jjxvg" (OuterVolumeSpecName: "kube-api-access-jjxvg") pod "c4175a68-3893-426f-b944-489aee0c9af2" (UID: "c4175a68-3893-426f-b944-489aee0c9af2"). InnerVolumeSpecName "kube-api-access-jjxvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:15:38 crc kubenswrapper[4768]: I1124 17:15:38.947519 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4175a68-3893-426f-b944-489aee0c9af2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4175a68-3893-426f-b944-489aee0c9af2" (UID: "c4175a68-3893-426f-b944-489aee0c9af2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:15:38 crc kubenswrapper[4768]: I1124 17:15:38.986563 4768 scope.go:117] "RemoveContainer" containerID="e14c80d7d68cc52e48f37aca7838f8c1f1f73f77371100019aa49c0392449a16" Nov 24 17:15:39 crc kubenswrapper[4768]: I1124 17:15:39.006469 4768 scope.go:117] "RemoveContainer" containerID="c27c42533728e4ff7683472c67a933649d8ef9bcad646a27293aeebaecd3bea4" Nov 24 17:15:39 crc kubenswrapper[4768]: I1124 17:15:39.016557 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4175a68-3893-426f-b944-489aee0c9af2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:15:39 crc kubenswrapper[4768]: I1124 17:15:39.016590 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjxvg\" (UniqueName: \"kubernetes.io/projected/c4175a68-3893-426f-b944-489aee0c9af2-kube-api-access-jjxvg\") on node \"crc\" DevicePath \"\"" Nov 24 17:15:39 crc kubenswrapper[4768]: I1124 17:15:39.016628 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4175a68-3893-426f-b944-489aee0c9af2-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:15:39 crc kubenswrapper[4768]: I1124 17:15:39.028397 4768 scope.go:117] "RemoveContainer" containerID="e452a9174a14ce15e2562fce491ab45550cac951280ee005a10193d5a9e0cc06" Nov 24 17:15:39 crc kubenswrapper[4768]: E1124 17:15:39.028935 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e452a9174a14ce15e2562fce491ab45550cac951280ee005a10193d5a9e0cc06\": container with ID starting with e452a9174a14ce15e2562fce491ab45550cac951280ee005a10193d5a9e0cc06 not found: ID does not exist" containerID="e452a9174a14ce15e2562fce491ab45550cac951280ee005a10193d5a9e0cc06" Nov 24 17:15:39 crc kubenswrapper[4768]: I1124 17:15:39.028963 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e452a9174a14ce15e2562fce491ab45550cac951280ee005a10193d5a9e0cc06"} err="failed to get container status \"e452a9174a14ce15e2562fce491ab45550cac951280ee005a10193d5a9e0cc06\": rpc error: code = NotFound desc = could not find container \"e452a9174a14ce15e2562fce491ab45550cac951280ee005a10193d5a9e0cc06\": container with ID starting with e452a9174a14ce15e2562fce491ab45550cac951280ee005a10193d5a9e0cc06 not found: ID does not exist" Nov 24 17:15:39 crc kubenswrapper[4768]: I1124 17:15:39.028983 4768 scope.go:117] "RemoveContainer" containerID="e14c80d7d68cc52e48f37aca7838f8c1f1f73f77371100019aa49c0392449a16" Nov 24 17:15:39 crc kubenswrapper[4768]: E1124 17:15:39.029241 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e14c80d7d68cc52e48f37aca7838f8c1f1f73f77371100019aa49c0392449a16\": container with ID starting with e14c80d7d68cc52e48f37aca7838f8c1f1f73f77371100019aa49c0392449a16 not found: ID does not exist" containerID="e14c80d7d68cc52e48f37aca7838f8c1f1f73f77371100019aa49c0392449a16" Nov 24 17:15:39 crc kubenswrapper[4768]: I1124 17:15:39.029259 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e14c80d7d68cc52e48f37aca7838f8c1f1f73f77371100019aa49c0392449a16"} err="failed to get container status \"e14c80d7d68cc52e48f37aca7838f8c1f1f73f77371100019aa49c0392449a16\": rpc error: code = NotFound desc = could not find container 
\"e14c80d7d68cc52e48f37aca7838f8c1f1f73f77371100019aa49c0392449a16\": container with ID starting with e14c80d7d68cc52e48f37aca7838f8c1f1f73f77371100019aa49c0392449a16 not found: ID does not exist" Nov 24 17:15:39 crc kubenswrapper[4768]: I1124 17:15:39.029271 4768 scope.go:117] "RemoveContainer" containerID="c27c42533728e4ff7683472c67a933649d8ef9bcad646a27293aeebaecd3bea4" Nov 24 17:15:39 crc kubenswrapper[4768]: E1124 17:15:39.029489 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c27c42533728e4ff7683472c67a933649d8ef9bcad646a27293aeebaecd3bea4\": container with ID starting with c27c42533728e4ff7683472c67a933649d8ef9bcad646a27293aeebaecd3bea4 not found: ID does not exist" containerID="c27c42533728e4ff7683472c67a933649d8ef9bcad646a27293aeebaecd3bea4" Nov 24 17:15:39 crc kubenswrapper[4768]: I1124 17:15:39.029517 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c27c42533728e4ff7683472c67a933649d8ef9bcad646a27293aeebaecd3bea4"} err="failed to get container status \"c27c42533728e4ff7683472c67a933649d8ef9bcad646a27293aeebaecd3bea4\": rpc error: code = NotFound desc = could not find container \"c27c42533728e4ff7683472c67a933649d8ef9bcad646a27293aeebaecd3bea4\": container with ID starting with c27c42533728e4ff7683472c67a933649d8ef9bcad646a27293aeebaecd3bea4 not found: ID does not exist" Nov 24 17:15:39 crc kubenswrapper[4768]: I1124 17:15:39.263954 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgpls"] Nov 24 17:15:39 crc kubenswrapper[4768]: I1124 17:15:39.274000 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgpls"] Nov 24 17:15:39 crc kubenswrapper[4768]: I1124 17:15:39.593123 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4175a68-3893-426f-b944-489aee0c9af2" path="/var/lib/kubelet/pods/c4175a68-3893-426f-b944-489aee0c9af2/volumes" Nov 24 17:15:39 crc kubenswrapper[4768]: I1124 17:15:39.938795 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvf8j" event={"ID":"97d74b70-e8c0-4315-822f-3d75f6b4e6ae","Type":"ContainerStarted","Data":"692cc3a11d2283ce45488b2697ce35fbf44d1a8559e39ed337c9481a74206e34"} Nov 24 17:15:39 crc kubenswrapper[4768]: I1124 17:15:39.966483 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mvf8j" podStartSLOduration=2.536300948 podStartE2EDuration="4.966460496s" podCreationTimestamp="2025-11-24 17:15:35 +0000 UTC" firstStartedPulling="2025-11-24 17:15:36.899605394 +0000 UTC m=+1418.146574052" lastFinishedPulling="2025-11-24 17:15:39.329764942 +0000 UTC m=+1420.576733600" observedRunningTime="2025-11-24 17:15:39.958602318 +0000 UTC m=+1421.205570996" watchObservedRunningTime="2025-11-24 17:15:39.966460496 +0000 UTC m=+1421.213429154" Nov 24 17:15:45 crc kubenswrapper[4768]: I1124 17:15:45.891981 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mvf8j" Nov 24 17:15:45 crc kubenswrapper[4768]: I1124 17:15:45.892658 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mvf8j" Nov 24 17:15:45 crc kubenswrapper[4768]: I1124 17:15:45.956947 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-mvf8j" Nov 24 17:15:46 crc kubenswrapper[4768]: I1124 17:15:46.067218 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mvf8j" Nov 24 17:15:46 crc kubenswrapper[4768]: I1124 17:15:46.198687 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mvf8j"] Nov 24 17:15:48 crc kubenswrapper[4768]: I1124 17:15:48.023383 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mvf8j" podUID="97d74b70-e8c0-4315-822f-3d75f6b4e6ae" containerName="registry-server" containerID="cri-o://692cc3a11d2283ce45488b2697ce35fbf44d1a8559e39ed337c9481a74206e34" gracePeriod=2 Nov 24 17:15:48 crc kubenswrapper[4768]: I1124 17:15:48.526128 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mvf8j" Nov 24 17:15:48 crc kubenswrapper[4768]: I1124 17:15:48.608133 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d74b70-e8c0-4315-822f-3d75f6b4e6ae-utilities\") pod \"97d74b70-e8c0-4315-822f-3d75f6b4e6ae\" (UID: \"97d74b70-e8c0-4315-822f-3d75f6b4e6ae\") " Nov 24 17:15:48 crc kubenswrapper[4768]: I1124 17:15:48.608318 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtgl2\" (UniqueName: \"kubernetes.io/projected/97d74b70-e8c0-4315-822f-3d75f6b4e6ae-kube-api-access-gtgl2\") pod \"97d74b70-e8c0-4315-822f-3d75f6b4e6ae\" (UID: \"97d74b70-e8c0-4315-822f-3d75f6b4e6ae\") " Nov 24 17:15:48 crc kubenswrapper[4768]: I1124 17:15:48.608430 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d74b70-e8c0-4315-822f-3d75f6b4e6ae-catalog-content\") pod \"97d74b70-e8c0-4315-822f-3d75f6b4e6ae\" (UID: \"97d74b70-e8c0-4315-822f-3d75f6b4e6ae\") " Nov 24 17:15:48 crc kubenswrapper[4768]: I1124 17:15:48.609551 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97d74b70-e8c0-4315-822f-3d75f6b4e6ae-utilities" (OuterVolumeSpecName: "utilities") pod "97d74b70-e8c0-4315-822f-3d75f6b4e6ae" (UID: "97d74b70-e8c0-4315-822f-3d75f6b4e6ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:15:48 crc kubenswrapper[4768]: I1124 17:15:48.615487 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97d74b70-e8c0-4315-822f-3d75f6b4e6ae-kube-api-access-gtgl2" (OuterVolumeSpecName: "kube-api-access-gtgl2") pod "97d74b70-e8c0-4315-822f-3d75f6b4e6ae" (UID: "97d74b70-e8c0-4315-822f-3d75f6b4e6ae"). InnerVolumeSpecName "kube-api-access-gtgl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:15:48 crc kubenswrapper[4768]: I1124 17:15:48.664519 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97d74b70-e8c0-4315-822f-3d75f6b4e6ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97d74b70-e8c0-4315-822f-3d75f6b4e6ae" (UID: "97d74b70-e8c0-4315-822f-3d75f6b4e6ae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:15:48 crc kubenswrapper[4768]: I1124 17:15:48.710670 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtgl2\" (UniqueName: \"kubernetes.io/projected/97d74b70-e8c0-4315-822f-3d75f6b4e6ae-kube-api-access-gtgl2\") on node \"crc\" DevicePath \"\"" Nov 24 17:15:48 crc kubenswrapper[4768]: I1124 17:15:48.710840 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d74b70-e8c0-4315-822f-3d75f6b4e6ae-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:15:48 crc kubenswrapper[4768]: I1124 17:15:48.710880 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d74b70-e8c0-4315-822f-3d75f6b4e6ae-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:15:49 crc kubenswrapper[4768]: I1124 17:15:49.032871 4768 generic.go:334] "Generic (PLEG): container finished" podID="97d74b70-e8c0-4315-822f-3d75f6b4e6ae" containerID="692cc3a11d2283ce45488b2697ce35fbf44d1a8559e39ed337c9481a74206e34" exitCode=0 Nov 24 17:15:49 crc kubenswrapper[4768]: I1124 17:15:49.032919 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvf8j" event={"ID":"97d74b70-e8c0-4315-822f-3d75f6b4e6ae","Type":"ContainerDied","Data":"692cc3a11d2283ce45488b2697ce35fbf44d1a8559e39ed337c9481a74206e34"} Nov 24 17:15:49 crc kubenswrapper[4768]: I1124 17:15:49.032937 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mvf8j" Nov 24 17:15:49 crc kubenswrapper[4768]: I1124 17:15:49.032967 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mvf8j" event={"ID":"97d74b70-e8c0-4315-822f-3d75f6b4e6ae","Type":"ContainerDied","Data":"41a9733e1033903065f0ae3121bf5823a5246f295192e2ad5424dc413694cf0f"} Nov 24 17:15:49 crc kubenswrapper[4768]: I1124 17:15:49.032990 4768 scope.go:117] "RemoveContainer" containerID="692cc3a11d2283ce45488b2697ce35fbf44d1a8559e39ed337c9481a74206e34" Nov 24 17:15:49 crc kubenswrapper[4768]: I1124 17:15:49.060324 4768 scope.go:117] "RemoveContainer" containerID="e4626aa291ff32c974b3cd58164fe04004b131aa9690ea847f25fb865cfe88d8" Nov 24 17:15:49 crc kubenswrapper[4768]: I1124 17:15:49.062799 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mvf8j"] Nov 24 17:15:49 crc kubenswrapper[4768]: I1124 17:15:49.081612 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mvf8j"] Nov 24 17:15:49 crc kubenswrapper[4768]: I1124 17:15:49.093278 4768 scope.go:117] "RemoveContainer" containerID="e39b7c398d8c823ac4c84abf48dd9f76ca16cc900ea5b0453f9b18ab77fa9fdb" Nov 24 17:15:49 crc kubenswrapper[4768]: I1124 17:15:49.129738 4768 scope.go:117] "RemoveContainer" containerID="692cc3a11d2283ce45488b2697ce35fbf44d1a8559e39ed337c9481a74206e34" Nov 24 17:15:49 crc kubenswrapper[4768]: E1124 17:15:49.130260 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"692cc3a11d2283ce45488b2697ce35fbf44d1a8559e39ed337c9481a74206e34\": container with ID starting with 692cc3a11d2283ce45488b2697ce35fbf44d1a8559e39ed337c9481a74206e34 not found: ID does not exist" containerID="692cc3a11d2283ce45488b2697ce35fbf44d1a8559e39ed337c9481a74206e34" Nov 24 17:15:49 crc kubenswrapper[4768]: I1124 17:15:49.130298 
Nov 24 17:15:49 crc kubenswrapper[4768]: I1124 17:15:49.130324 4768 scope.go:117] "RemoveContainer" containerID="e4626aa291ff32c974b3cd58164fe04004b131aa9690ea847f25fb865cfe88d8"
Nov 24 17:15:49 crc kubenswrapper[4768]: E1124 17:15:49.130633 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4626aa291ff32c974b3cd58164fe04004b131aa9690ea847f25fb865cfe88d8\": container with ID starting with e4626aa291ff32c974b3cd58164fe04004b131aa9690ea847f25fb865cfe88d8 not found: ID does not exist" containerID="e4626aa291ff32c974b3cd58164fe04004b131aa9690ea847f25fb865cfe88d8"
Nov 24 17:15:49 crc kubenswrapper[4768]: I1124 17:15:49.130655 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4626aa291ff32c974b3cd58164fe04004b131aa9690ea847f25fb865cfe88d8"} err="failed to get container status \"e4626aa291ff32c974b3cd58164fe04004b131aa9690ea847f25fb865cfe88d8\": rpc error: code = NotFound desc = could not find container \"e4626aa291ff32c974b3cd58164fe04004b131aa9690ea847f25fb865cfe88d8\": container with ID starting with e4626aa291ff32c974b3cd58164fe04004b131aa9690ea847f25fb865cfe88d8 not found: ID does not exist"
Nov 24 17:15:49 crc kubenswrapper[4768]: I1124 17:15:49.130671 4768 scope.go:117] "RemoveContainer" containerID="e39b7c398d8c823ac4c84abf48dd9f76ca16cc900ea5b0453f9b18ab77fa9fdb"
Nov 24 17:15:49 crc kubenswrapper[4768]: E1124 17:15:49.130924 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e39b7c398d8c823ac4c84abf48dd9f76ca16cc900ea5b0453f9b18ab77fa9fdb\": container with ID starting with e39b7c398d8c823ac4c84abf48dd9f76ca16cc900ea5b0453f9b18ab77fa9fdb not found: ID does not exist" containerID="e39b7c398d8c823ac4c84abf48dd9f76ca16cc900ea5b0453f9b18ab77fa9fdb"
Nov 24 17:15:49 crc kubenswrapper[4768]: I1124 17:15:49.130945 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e39b7c398d8c823ac4c84abf48dd9f76ca16cc900ea5b0453f9b18ab77fa9fdb"} err="failed to get container status \"e39b7c398d8c823ac4c84abf48dd9f76ca16cc900ea5b0453f9b18ab77fa9fdb\": rpc error: code = NotFound desc = could not find container \"e39b7c398d8c823ac4c84abf48dd9f76ca16cc900ea5b0453f9b18ab77fa9fdb\": container with ID starting with e39b7c398d8c823ac4c84abf48dd9f76ca16cc900ea5b0453f9b18ab77fa9fdb not found: ID does not exist"
Nov 24 17:15:49 crc kubenswrapper[4768]: I1124 17:15:49.596845 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97d74b70-e8c0-4315-822f-3d75f6b4e6ae" path="/var/lib/kubelet/pods/97d74b70-e8c0-4315-822f-3d75f6b4e6ae/volumes"
Nov 24 17:16:04 crc kubenswrapper[4768]: I1124 17:16:04.892871 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 17:16:04 crc kubenswrapper[4768]: I1124 17:16:04.893579 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 17:16:24 crc kubenswrapper[4768]: I1124 17:16:24.847447 4768 scope.go:117] "RemoveContainer" containerID="67808bd1850913efbd3bbeb884890ac15880e29fe5927306718df859a79d328d"
Nov 24 17:16:24 crc kubenswrapper[4768]: I1124 17:16:24.872238 4768 scope.go:117] "RemoveContainer" containerID="fa5743b3d151ab345867bca12752ae4c8697a767c2ae73b04e69a4f2df6a6e7a"
Nov 24 17:16:25 crc kubenswrapper[4768]: I1124 17:16:25.938600 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xqk6x"]
Nov 24 17:16:25 crc kubenswrapper[4768]: E1124 17:16:25.939628 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d74b70-e8c0-4315-822f-3d75f6b4e6ae" containerName="extract-utilities"
Nov 24 17:16:25 crc kubenswrapper[4768]: I1124 17:16:25.939652 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d74b70-e8c0-4315-822f-3d75f6b4e6ae" containerName="extract-utilities"
Nov 24 17:16:25 crc kubenswrapper[4768]: E1124 17:16:25.939693 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4175a68-3893-426f-b944-489aee0c9af2" containerName="extract-utilities"
Nov 24 17:16:25 crc kubenswrapper[4768]: I1124 17:16:25.939704 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4175a68-3893-426f-b944-489aee0c9af2" containerName="extract-utilities"
Nov 24 17:16:25 crc kubenswrapper[4768]: E1124 17:16:25.939750 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d74b70-e8c0-4315-822f-3d75f6b4e6ae" containerName="registry-server"
Nov 24 17:16:25 crc kubenswrapper[4768]: I1124 17:16:25.939763 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d74b70-e8c0-4315-822f-3d75f6b4e6ae" containerName="registry-server"
Nov 24 17:16:25 crc kubenswrapper[4768]: E1124 17:16:25.939783 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4175a68-3893-426f-b944-489aee0c9af2" containerName="registry-server"
Nov 24 17:16:25 crc kubenswrapper[4768]: I1124 17:16:25.939795 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4175a68-3893-426f-b944-489aee0c9af2" containerName="registry-server"
Nov 24 17:16:25 crc kubenswrapper[4768]: E1124 17:16:25.939822 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4175a68-3893-426f-b944-489aee0c9af2" containerName="extract-content"
Nov 24 17:16:25 crc kubenswrapper[4768]: I1124 17:16:25.939833 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4175a68-3893-426f-b944-489aee0c9af2" containerName="extract-content"
Nov 24 17:16:25 crc kubenswrapper[4768]: E1124 17:16:25.939855 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d74b70-e8c0-4315-822f-3d75f6b4e6ae" containerName="extract-content"
Nov 24 17:16:25 crc kubenswrapper[4768]: I1124 17:16:25.939864 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d74b70-e8c0-4315-822f-3d75f6b4e6ae" containerName="extract-content"
Nov 24 17:16:25 crc kubenswrapper[4768]: I1124 17:16:25.940181 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="97d74b70-e8c0-4315-822f-3d75f6b4e6ae" containerName="registry-server"
containerName="registry-server" Nov 24 17:16:25 crc kubenswrapper[4768]: I1124 17:16:25.940215 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4175a68-3893-426f-b944-489aee0c9af2" containerName="registry-server" Nov 24 17:16:25 crc kubenswrapper[4768]: I1124 17:16:25.942732 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xqk6x" Nov 24 17:16:25 crc kubenswrapper[4768]: I1124 17:16:25.957124 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xqk6x"] Nov 24 17:16:26 crc kubenswrapper[4768]: I1124 17:16:26.050449 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c4cd069-b524-4ad5-a652-a8dc9223339f-utilities\") pod \"redhat-operators-xqk6x\" (UID: \"0c4cd069-b524-4ad5-a652-a8dc9223339f\") " pod="openshift-marketplace/redhat-operators-xqk6x" Nov 24 17:16:26 crc kubenswrapper[4768]: I1124 17:16:26.050739 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbq8t\" (UniqueName: \"kubernetes.io/projected/0c4cd069-b524-4ad5-a652-a8dc9223339f-kube-api-access-pbq8t\") pod \"redhat-operators-xqk6x\" (UID: \"0c4cd069-b524-4ad5-a652-a8dc9223339f\") " pod="openshift-marketplace/redhat-operators-xqk6x" Nov 24 17:16:26 crc kubenswrapper[4768]: I1124 17:16:26.050828 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c4cd069-b524-4ad5-a652-a8dc9223339f-catalog-content\") pod \"redhat-operators-xqk6x\" (UID: \"0c4cd069-b524-4ad5-a652-a8dc9223339f\") " pod="openshift-marketplace/redhat-operators-xqk6x" Nov 24 17:16:26 crc kubenswrapper[4768]: I1124 17:16:26.152480 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c4cd069-b524-4ad5-a652-a8dc9223339f-utilities\") pod \"redhat-operators-xqk6x\" (UID: \"0c4cd069-b524-4ad5-a652-a8dc9223339f\") " pod="openshift-marketplace/redhat-operators-xqk6x" Nov 24 17:16:26 crc kubenswrapper[4768]: I1124 17:16:26.152539 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbq8t\" (UniqueName: \"kubernetes.io/projected/0c4cd069-b524-4ad5-a652-a8dc9223339f-kube-api-access-pbq8t\") pod \"redhat-operators-xqk6x\" (UID: \"0c4cd069-b524-4ad5-a652-a8dc9223339f\") " pod="openshift-marketplace/redhat-operators-xqk6x" Nov 24 17:16:26 crc kubenswrapper[4768]: I1124 17:16:26.152581 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c4cd069-b524-4ad5-a652-a8dc9223339f-catalog-content\") pod \"redhat-operators-xqk6x\" (UID: \"0c4cd069-b524-4ad5-a652-a8dc9223339f\") " pod="openshift-marketplace/redhat-operators-xqk6x" Nov 24 17:16:26 crc kubenswrapper[4768]: I1124 17:16:26.153132 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c4cd069-b524-4ad5-a652-a8dc9223339f-catalog-content\") pod \"redhat-operators-xqk6x\" (UID: \"0c4cd069-b524-4ad5-a652-a8dc9223339f\") " pod="openshift-marketplace/redhat-operators-xqk6x" Nov 24 17:16:26 crc kubenswrapper[4768]: I1124 17:16:26.153248 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0c4cd069-b524-4ad5-a652-a8dc9223339f-utilities\") pod \"redhat-operators-xqk6x\" (UID: \"0c4cd069-b524-4ad5-a652-a8dc9223339f\") " pod="openshift-marketplace/redhat-operators-xqk6x" Nov 24 17:16:26 crc kubenswrapper[4768]: I1124 17:16:26.178759 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbq8t\" (UniqueName: \"kubernetes.io/projected/0c4cd069-b524-4ad5-a652-a8dc9223339f-kube-api-access-pbq8t\") pod \"redhat-operators-xqk6x\" (UID: \"0c4cd069-b524-4ad5-a652-a8dc9223339f\") " pod="openshift-marketplace/redhat-operators-xqk6x" Nov 24 17:16:26 crc kubenswrapper[4768]: I1124 17:16:26.272015 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xqk6x" Nov 24 17:16:26 crc kubenswrapper[4768]: I1124 17:16:26.782735 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xqk6x"] Nov 24 17:16:27 crc kubenswrapper[4768]: I1124 17:16:27.473560 4768 generic.go:334] "Generic (PLEG): container finished" podID="0c4cd069-b524-4ad5-a652-a8dc9223339f" containerID="2d5c4c979a8266236eafa27984f84c9961344dd6b7d274e3c3d7cee0ed4ef279" exitCode=0 Nov 24 17:16:27 crc kubenswrapper[4768]: I1124 17:16:27.473620 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqk6x" event={"ID":"0c4cd069-b524-4ad5-a652-a8dc9223339f","Type":"ContainerDied","Data":"2d5c4c979a8266236eafa27984f84c9961344dd6b7d274e3c3d7cee0ed4ef279"} Nov 24 17:16:27 crc kubenswrapper[4768]: I1124 17:16:27.475135 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqk6x" event={"ID":"0c4cd069-b524-4ad5-a652-a8dc9223339f","Type":"ContainerStarted","Data":"90935d9a678afb9043975cbbc6a73aae3469a3fb53591bbf03c41e460f74ddf6"} Nov 24 17:16:30 crc kubenswrapper[4768]: I1124 17:16:30.501847 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqk6x" event={"ID":"0c4cd069-b524-4ad5-a652-a8dc9223339f","Type":"ContainerStarted","Data":"4dca5672d8f27909a923714a8df572ac97185d3e2677ab0f309ec452b211deb0"} Nov 24 17:16:31 crc kubenswrapper[4768]: I1124 17:16:31.512972 4768 generic.go:334] "Generic (PLEG): container finished" podID="0c4cd069-b524-4ad5-a652-a8dc9223339f" containerID="4dca5672d8f27909a923714a8df572ac97185d3e2677ab0f309ec452b211deb0" exitCode=0 Nov 24 17:16:31 crc kubenswrapper[4768]: I1124 17:16:31.513208 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqk6x" event={"ID":"0c4cd069-b524-4ad5-a652-a8dc9223339f","Type":"ContainerDied","Data":"4dca5672d8f27909a923714a8df572ac97185d3e2677ab0f309ec452b211deb0"} Nov 24 17:16:34 crc kubenswrapper[4768]: I1124 17:16:34.893206 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:16:34 crc kubenswrapper[4768]: I1124 17:16:34.893728 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:16:34 crc kubenswrapper[4768]: I1124 
Nov 24 17:16:35 crc kubenswrapper[4768]: I1124 17:16:35.558646 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3b9927303786853c5de5f1aba770c6858638a08a624bec5c7cfe0a60dd91f385"} pod="openshift-machine-config-operator/machine-config-daemon-jf255" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 17:16:35 crc kubenswrapper[4768]: I1124 17:16:35.559306 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" containerID="cri-o://3b9927303786853c5de5f1aba770c6858638a08a624bec5c7cfe0a60dd91f385" gracePeriod=600
Nov 24 17:16:36 crc kubenswrapper[4768]: I1124 17:16:36.570891 4768 generic.go:334] "Generic (PLEG): container finished" podID="517d8128-bef5-40a3-a786-5010780c2a58" containerID="3b9927303786853c5de5f1aba770c6858638a08a624bec5c7cfe0a60dd91f385" exitCode=0
Nov 24 17:16:36 crc kubenswrapper[4768]: I1124 17:16:36.571120 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" event={"ID":"517d8128-bef5-40a3-a786-5010780c2a58","Type":"ContainerDied","Data":"3b9927303786853c5de5f1aba770c6858638a08a624bec5c7cfe0a60dd91f385"}
Nov 24 17:16:36 crc kubenswrapper[4768]: I1124 17:16:36.573102 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" event={"ID":"517d8128-bef5-40a3-a786-5010780c2a58","Type":"ContainerStarted","Data":"99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c"}
Nov 24 17:16:36 crc kubenswrapper[4768]: I1124 17:16:36.573134 4768 scope.go:117] "RemoveContainer" containerID="2365a36edb89edb46f3a062496f3dfcc63c2a3b858eedccc75acd6744646ba2d"
Nov 24 17:16:36 crc kubenswrapper[4768]: I1124 17:16:36.581421 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqk6x" event={"ID":"0c4cd069-b524-4ad5-a652-a8dc9223339f","Type":"ContainerStarted","Data":"fe8f7566762f513976ad7a568de5411dcf88dd610070ccd08f71aa9b00dd8438"}
Nov 24 17:16:36 crc kubenswrapper[4768]: I1124 17:16:36.616817 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xqk6x" podStartSLOduration=3.760093856 podStartE2EDuration="11.616800247s" podCreationTimestamp="2025-11-24 17:16:25 +0000 UTC" firstStartedPulling="2025-11-24 17:16:27.476275608 +0000 UTC m=+1468.723244296" lastFinishedPulling="2025-11-24 17:16:35.332981999 +0000 UTC m=+1476.579950687" observedRunningTime="2025-11-24 17:16:36.611676884 +0000 UTC m=+1477.858645552" watchObservedRunningTime="2025-11-24 17:16:36.616800247 +0000 UTC m=+1477.863768895"
Nov 24 17:16:46 crc kubenswrapper[4768]: I1124 17:16:46.272894 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xqk6x"
Nov 24 17:16:46 crc kubenswrapper[4768]: I1124 17:16:46.273552 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xqk6x"
Nov 24 17:16:46 crc kubenswrapper[4768]: I1124 17:16:46.340485 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xqk6x"
status="started" pod="openshift-marketplace/redhat-operators-xqk6x" Nov 24 17:16:46 crc kubenswrapper[4768]: I1124 17:16:46.716690 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xqk6x" Nov 24 17:16:46 crc kubenswrapper[4768]: I1124 17:16:46.770984 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xqk6x"] Nov 24 17:16:48 crc kubenswrapper[4768]: I1124 17:16:48.712142 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xqk6x" podUID="0c4cd069-b524-4ad5-a652-a8dc9223339f" containerName="registry-server" containerID="cri-o://fe8f7566762f513976ad7a568de5411dcf88dd610070ccd08f71aa9b00dd8438" gracePeriod=2 Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.248092 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xqk6x" Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.411729 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c4cd069-b524-4ad5-a652-a8dc9223339f-utilities\") pod \"0c4cd069-b524-4ad5-a652-a8dc9223339f\" (UID: \"0c4cd069-b524-4ad5-a652-a8dc9223339f\") " Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.412179 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbq8t\" (UniqueName: \"kubernetes.io/projected/0c4cd069-b524-4ad5-a652-a8dc9223339f-kube-api-access-pbq8t\") pod \"0c4cd069-b524-4ad5-a652-a8dc9223339f\" (UID: \"0c4cd069-b524-4ad5-a652-a8dc9223339f\") " Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.412290 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c4cd069-b524-4ad5-a652-a8dc9223339f-catalog-content\") pod \"0c4cd069-b524-4ad5-a652-a8dc9223339f\" (UID: \"0c4cd069-b524-4ad5-a652-a8dc9223339f\") " Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.412437 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c4cd069-b524-4ad5-a652-a8dc9223339f-utilities" (OuterVolumeSpecName: "utilities") pod "0c4cd069-b524-4ad5-a652-a8dc9223339f" (UID: "0c4cd069-b524-4ad5-a652-a8dc9223339f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.412939 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c4cd069-b524-4ad5-a652-a8dc9223339f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.418031 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c4cd069-b524-4ad5-a652-a8dc9223339f-kube-api-access-pbq8t" (OuterVolumeSpecName: "kube-api-access-pbq8t") pod "0c4cd069-b524-4ad5-a652-a8dc9223339f" (UID: "0c4cd069-b524-4ad5-a652-a8dc9223339f"). InnerVolumeSpecName "kube-api-access-pbq8t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.499709 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c4cd069-b524-4ad5-a652-a8dc9223339f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c4cd069-b524-4ad5-a652-a8dc9223339f" (UID: "0c4cd069-b524-4ad5-a652-a8dc9223339f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.514650 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbq8t\" (UniqueName: \"kubernetes.io/projected/0c4cd069-b524-4ad5-a652-a8dc9223339f-kube-api-access-pbq8t\") on node \"crc\" DevicePath \"\"" Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.514707 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c4cd069-b524-4ad5-a652-a8dc9223339f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.724480 4768 generic.go:334] "Generic (PLEG): container finished" podID="0c4cd069-b524-4ad5-a652-a8dc9223339f" containerID="fe8f7566762f513976ad7a568de5411dcf88dd610070ccd08f71aa9b00dd8438" exitCode=0 Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.724532 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xqk6x" Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.724545 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqk6x" event={"ID":"0c4cd069-b524-4ad5-a652-a8dc9223339f","Type":"ContainerDied","Data":"fe8f7566762f513976ad7a568de5411dcf88dd610070ccd08f71aa9b00dd8438"} Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.724597 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqk6x" event={"ID":"0c4cd069-b524-4ad5-a652-a8dc9223339f","Type":"ContainerDied","Data":"90935d9a678afb9043975cbbc6a73aae3469a3fb53591bbf03c41e460f74ddf6"} Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.724628 4768 scope.go:117] "RemoveContainer" containerID="fe8f7566762f513976ad7a568de5411dcf88dd610070ccd08f71aa9b00dd8438" Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.752969 4768 scope.go:117] "RemoveContainer" containerID="4dca5672d8f27909a923714a8df572ac97185d3e2677ab0f309ec452b211deb0" Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.753617 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xqk6x"] Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.764295 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xqk6x"] Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.776577 4768 scope.go:117] "RemoveContainer" containerID="2d5c4c979a8266236eafa27984f84c9961344dd6b7d274e3c3d7cee0ed4ef279" Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.822580 4768 scope.go:117] "RemoveContainer" containerID="fe8f7566762f513976ad7a568de5411dcf88dd610070ccd08f71aa9b00dd8438" Nov 24 17:16:49 crc kubenswrapper[4768]: E1124 17:16:49.823559 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe8f7566762f513976ad7a568de5411dcf88dd610070ccd08f71aa9b00dd8438\": container with ID starting with fe8f7566762f513976ad7a568de5411dcf88dd610070ccd08f71aa9b00dd8438 
not found: ID does not exist" containerID="fe8f7566762f513976ad7a568de5411dcf88dd610070ccd08f71aa9b00dd8438" Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.823601 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe8f7566762f513976ad7a568de5411dcf88dd610070ccd08f71aa9b00dd8438"} err="failed to get container status \"fe8f7566762f513976ad7a568de5411dcf88dd610070ccd08f71aa9b00dd8438\": rpc error: code = NotFound desc = could not find container \"fe8f7566762f513976ad7a568de5411dcf88dd610070ccd08f71aa9b00dd8438\": container with ID starting with fe8f7566762f513976ad7a568de5411dcf88dd610070ccd08f71aa9b00dd8438 not found: ID does not exist" Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.823630 4768 scope.go:117] "RemoveContainer" containerID="4dca5672d8f27909a923714a8df572ac97185d3e2677ab0f309ec452b211deb0" Nov 24 17:16:49 crc kubenswrapper[4768]: E1124 17:16:49.824214 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dca5672d8f27909a923714a8df572ac97185d3e2677ab0f309ec452b211deb0\": container with ID starting with 4dca5672d8f27909a923714a8df572ac97185d3e2677ab0f309ec452b211deb0 not found: ID does not exist" containerID="4dca5672d8f27909a923714a8df572ac97185d3e2677ab0f309ec452b211deb0" Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.824252 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dca5672d8f27909a923714a8df572ac97185d3e2677ab0f309ec452b211deb0"} err="failed to get container status \"4dca5672d8f27909a923714a8df572ac97185d3e2677ab0f309ec452b211deb0\": rpc error: code = NotFound desc = could not find container \"4dca5672d8f27909a923714a8df572ac97185d3e2677ab0f309ec452b211deb0\": container with ID starting with 4dca5672d8f27909a923714a8df572ac97185d3e2677ab0f309ec452b211deb0 not found: ID does not exist" Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.824310 4768 scope.go:117] "RemoveContainer" containerID="2d5c4c979a8266236eafa27984f84c9961344dd6b7d274e3c3d7cee0ed4ef279" Nov 24 17:16:49 crc kubenswrapper[4768]: E1124 17:16:49.824738 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d5c4c979a8266236eafa27984f84c9961344dd6b7d274e3c3d7cee0ed4ef279\": container with ID starting with 2d5c4c979a8266236eafa27984f84c9961344dd6b7d274e3c3d7cee0ed4ef279 not found: ID does not exist" containerID="2d5c4c979a8266236eafa27984f84c9961344dd6b7d274e3c3d7cee0ed4ef279" Nov 24 17:16:49 crc kubenswrapper[4768]: I1124 17:16:49.824765 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d5c4c979a8266236eafa27984f84c9961344dd6b7d274e3c3d7cee0ed4ef279"} err="failed to get container status \"2d5c4c979a8266236eafa27984f84c9961344dd6b7d274e3c3d7cee0ed4ef279\": rpc error: code = NotFound desc = could not find container \"2d5c4c979a8266236eafa27984f84c9961344dd6b7d274e3c3d7cee0ed4ef279\": container with ID starting with 2d5c4c979a8266236eafa27984f84c9961344dd6b7d274e3c3d7cee0ed4ef279 not found: ID does not exist" Nov 24 17:16:51 crc kubenswrapper[4768]: I1124 17:16:51.592308 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c4cd069-b524-4ad5-a652-a8dc9223339f" path="/var/lib/kubelet/pods/0c4cd069-b524-4ad5-a652-a8dc9223339f/volumes" Nov 24 17:17:07 crc kubenswrapper[4768]: I1124 17:17:07.045532 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-d93f-account-create-9nz7l"] Nov 24 17:17:07 crc kubenswrapper[4768]: I1124 17:17:07.057722 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-8c2vc"] Nov 24 17:17:07 crc kubenswrapper[4768]: I1124 17:17:07.068130 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-8hn9l"] Nov 24 17:17:07 crc kubenswrapper[4768]: I1124 17:17:07.076340 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a945-account-create-lxxbd"] Nov 24 17:17:07 crc kubenswrapper[4768]: I1124 17:17:07.085017 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a945-account-create-lxxbd"] Nov 24 17:17:07 crc kubenswrapper[4768]: I1124 17:17:07.094066 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-8hn9l"] Nov 24 17:17:07 crc kubenswrapper[4768]: I1124 17:17:07.102789 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-8c2vc"] Nov 24 17:17:07 crc kubenswrapper[4768]: I1124 17:17:07.110435 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-d93f-account-create-9nz7l"] Nov 24 17:17:07 crc kubenswrapper[4768]: I1124 17:17:07.592074 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="339680b8-1c63-400b-92a2-5b3dff0d90f3" path="/var/lib/kubelet/pods/339680b8-1c63-400b-92a2-5b3dff0d90f3/volumes" Nov 24 17:17:07 crc kubenswrapper[4768]: I1124 17:17:07.592938 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c5b0d6-d045-486b-88fa-32652a7d875f" path="/var/lib/kubelet/pods/71c5b0d6-d045-486b-88fa-32652a7d875f/volumes" Nov 24 17:17:07 crc kubenswrapper[4768]: I1124 17:17:07.593576 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f9e0002-d010-466b-9a99-c3a4ae2bf020" path="/var/lib/kubelet/pods/8f9e0002-d010-466b-9a99-c3a4ae2bf020/volumes" Nov 24 17:17:07 crc kubenswrapper[4768]: I1124 17:17:07.594229 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3089d03-ccf1-4aff-ac88-45fefc76ec67" path="/var/lib/kubelet/pods/d3089d03-ccf1-4aff-ac88-45fefc76ec67/volumes" Nov 24 17:17:10 crc kubenswrapper[4768]: I1124 17:17:10.032339 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-xbvzf"] Nov 24 17:17:10 crc kubenswrapper[4768]: I1124 17:17:10.047147 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-0131-account-create-h9pmh"] Nov 24 17:17:10 crc kubenswrapper[4768]: I1124 17:17:10.055571 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-xbvzf"] Nov 24 17:17:10 crc kubenswrapper[4768]: I1124 17:17:10.064107 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-0131-account-create-h9pmh"] Nov 24 17:17:11 crc kubenswrapper[4768]: I1124 17:17:11.600250 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11e8ca38-334e-4dfd-a174-f02bfc8c69ec" path="/var/lib/kubelet/pods/11e8ca38-334e-4dfd-a174-f02bfc8c69ec/volumes" Nov 24 17:17:11 crc kubenswrapper[4768]: I1124 17:17:11.601398 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34f06565-e23b-4e08-88d7-280cb402977a" path="/var/lib/kubelet/pods/34f06565-e23b-4e08-88d7-280cb402977a/volumes" Nov 24 17:17:19 crc kubenswrapper[4768]: I1124 17:17:19.610743 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nlcs7"] Nov 24 17:17:19 crc 
Nov 24 17:17:19 crc kubenswrapper[4768]: I1124 17:17:19.611736 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c4cd069-b524-4ad5-a652-a8dc9223339f" containerName="extract-content"
Nov 24 17:17:19 crc kubenswrapper[4768]: E1124 17:17:19.611760 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c4cd069-b524-4ad5-a652-a8dc9223339f" containerName="registry-server"
Nov 24 17:17:19 crc kubenswrapper[4768]: I1124 17:17:19.611769 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c4cd069-b524-4ad5-a652-a8dc9223339f" containerName="registry-server"
Nov 24 17:17:19 crc kubenswrapper[4768]: E1124 17:17:19.611793 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c4cd069-b524-4ad5-a652-a8dc9223339f" containerName="extract-utilities"
Nov 24 17:17:19 crc kubenswrapper[4768]: I1124 17:17:19.611801 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c4cd069-b524-4ad5-a652-a8dc9223339f" containerName="extract-utilities"
Nov 24 17:17:19 crc kubenswrapper[4768]: I1124 17:17:19.612037 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c4cd069-b524-4ad5-a652-a8dc9223339f" containerName="registry-server"
Nov 24 17:17:19 crc kubenswrapper[4768]: I1124 17:17:19.614038 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nlcs7"
Nov 24 17:17:19 crc kubenswrapper[4768]: I1124 17:17:19.626974 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nlcs7"]
Nov 24 17:17:19 crc kubenswrapper[4768]: I1124 17:17:19.742318 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/430e0fa5-2027-43dc-be80-d6ee9f82d380-catalog-content\") pod \"community-operators-nlcs7\" (UID: \"430e0fa5-2027-43dc-be80-d6ee9f82d380\") " pod="openshift-marketplace/community-operators-nlcs7"
Nov 24 17:17:19 crc kubenswrapper[4768]: I1124 17:17:19.742601 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/430e0fa5-2027-43dc-be80-d6ee9f82d380-utilities\") pod \"community-operators-nlcs7\" (UID: \"430e0fa5-2027-43dc-be80-d6ee9f82d380\") " pod="openshift-marketplace/community-operators-nlcs7"
Nov 24 17:17:19 crc kubenswrapper[4768]: I1124 17:17:19.742869 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntp8j\" (UniqueName: \"kubernetes.io/projected/430e0fa5-2027-43dc-be80-d6ee9f82d380-kube-api-access-ntp8j\") pod \"community-operators-nlcs7\" (UID: \"430e0fa5-2027-43dc-be80-d6ee9f82d380\") " pod="openshift-marketplace/community-operators-nlcs7"
Nov 24 17:17:19 crc kubenswrapper[4768]: I1124 17:17:19.844776 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/430e0fa5-2027-43dc-be80-d6ee9f82d380-catalog-content\") pod \"community-operators-nlcs7\" (UID: \"430e0fa5-2027-43dc-be80-d6ee9f82d380\") " pod="openshift-marketplace/community-operators-nlcs7"
Nov 24 17:17:19 crc kubenswrapper[4768]: I1124 17:17:19.844822 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/430e0fa5-2027-43dc-be80-d6ee9f82d380-utilities\") pod \"community-operators-nlcs7\" (UID: \"430e0fa5-2027-43dc-be80-d6ee9f82d380\") " pod="openshift-marketplace/community-operators-nlcs7"
(UniqueName: \"kubernetes.io/empty-dir/430e0fa5-2027-43dc-be80-d6ee9f82d380-utilities\") pod \"community-operators-nlcs7\" (UID: \"430e0fa5-2027-43dc-be80-d6ee9f82d380\") " pod="openshift-marketplace/community-operators-nlcs7" Nov 24 17:17:19 crc kubenswrapper[4768]: I1124 17:17:19.844910 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntp8j\" (UniqueName: \"kubernetes.io/projected/430e0fa5-2027-43dc-be80-d6ee9f82d380-kube-api-access-ntp8j\") pod \"community-operators-nlcs7\" (UID: \"430e0fa5-2027-43dc-be80-d6ee9f82d380\") " pod="openshift-marketplace/community-operators-nlcs7" Nov 24 17:17:19 crc kubenswrapper[4768]: I1124 17:17:19.845420 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/430e0fa5-2027-43dc-be80-d6ee9f82d380-catalog-content\") pod \"community-operators-nlcs7\" (UID: \"430e0fa5-2027-43dc-be80-d6ee9f82d380\") " pod="openshift-marketplace/community-operators-nlcs7" Nov 24 17:17:19 crc kubenswrapper[4768]: I1124 17:17:19.845456 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/430e0fa5-2027-43dc-be80-d6ee9f82d380-utilities\") pod \"community-operators-nlcs7\" (UID: \"430e0fa5-2027-43dc-be80-d6ee9f82d380\") " pod="openshift-marketplace/community-operators-nlcs7" Nov 24 17:17:19 crc kubenswrapper[4768]: I1124 17:17:19.879268 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntp8j\" (UniqueName: \"kubernetes.io/projected/430e0fa5-2027-43dc-be80-d6ee9f82d380-kube-api-access-ntp8j\") pod \"community-operators-nlcs7\" (UID: \"430e0fa5-2027-43dc-be80-d6ee9f82d380\") " pod="openshift-marketplace/community-operators-nlcs7" Nov 24 17:17:19 crc kubenswrapper[4768]: I1124 17:17:19.946207 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nlcs7" Nov 24 17:17:20 crc kubenswrapper[4768]: I1124 17:17:20.479952 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nlcs7"] Nov 24 17:17:21 crc kubenswrapper[4768]: I1124 17:17:21.102917 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlcs7" event={"ID":"430e0fa5-2027-43dc-be80-d6ee9f82d380","Type":"ContainerStarted","Data":"f0f9c23b43f64b0f5b024804db5c6ed2bd801954ea0b43dc0322533a253898ba"} Nov 24 17:17:22 crc kubenswrapper[4768]: I1124 17:17:22.114700 4768 generic.go:334] "Generic (PLEG): container finished" podID="430e0fa5-2027-43dc-be80-d6ee9f82d380" containerID="f5ff624ff5488d4b7f156d2d3e217a85a88a6597cb257c396a60c06940e7ba3a" exitCode=0 Nov 24 17:17:22 crc kubenswrapper[4768]: I1124 17:17:22.114780 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlcs7" event={"ID":"430e0fa5-2027-43dc-be80-d6ee9f82d380","Type":"ContainerDied","Data":"f5ff624ff5488d4b7f156d2d3e217a85a88a6597cb257c396a60c06940e7ba3a"} Nov 24 17:17:24 crc kubenswrapper[4768]: I1124 17:17:24.975159 4768 scope.go:117] "RemoveContainer" containerID="801adc3560fc59757f8e5267028bca9275a811cd888099b04e60b3ed4da09040" Nov 24 17:17:25 crc kubenswrapper[4768]: I1124 17:17:25.509615 4768 scope.go:117] "RemoveContainer" containerID="93cf4dc7f399e68d6a09411dd327b3fee00a4c41065a0ac8c373a849a646dd88" Nov 24 17:17:25 crc kubenswrapper[4768]: I1124 17:17:25.847127 4768 scope.go:117] "RemoveContainer" containerID="6329f0262c24c4c4c2efc9adb7cc7abf77e9bada8184d4c76e2b12411c734bc6" Nov 24 17:17:25 crc kubenswrapper[4768]: I1124 17:17:25.885360 4768 scope.go:117] "RemoveContainer" containerID="971501cff335143c82645323507905d65f8733e81a258b8d9091fed616820cc4" Nov 24 17:17:25 crc kubenswrapper[4768]: I1124 17:17:25.932850 4768 scope.go:117] "RemoveContainer" containerID="292a685fa399858a0fdda12182f1d6d055e91cd8830ab63cb6b4290913e11862" Nov 24 17:17:26 crc kubenswrapper[4768]: I1124 17:17:26.002774 4768 scope.go:117] "RemoveContainer" containerID="72d42fc107c69087593f79543b8627ece8db95e6255248722c00b7dc89190c2f" Nov 24 17:17:26 crc kubenswrapper[4768]: I1124 17:17:26.076320 4768 scope.go:117] "RemoveContainer" containerID="c4f1b99ce5c335a3a2764b05fd6fb617c1e1c6f0a14cf5c5f7f77b60c8f75a2b" Nov 24 17:17:26 crc kubenswrapper[4768]: I1124 17:17:26.095637 4768 scope.go:117] "RemoveContainer" containerID="0df386c41bf1fc6cd8893daf074b61b3fef5b493d54f749b4ba325c81012da42" Nov 24 17:17:27 crc kubenswrapper[4768]: I1124 17:17:27.180579 4768 generic.go:334] "Generic (PLEG): container finished" podID="430e0fa5-2027-43dc-be80-d6ee9f82d380" containerID="54a345d8bd388fcfc1e971992aaea151ca200a3e5cbb2d1344183fc7bfa7c096" exitCode=0 Nov 24 17:17:27 crc kubenswrapper[4768]: I1124 17:17:27.180675 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlcs7" event={"ID":"430e0fa5-2027-43dc-be80-d6ee9f82d380","Type":"ContainerDied","Data":"54a345d8bd388fcfc1e971992aaea151ca200a3e5cbb2d1344183fc7bfa7c096"} Nov 24 17:17:29 crc kubenswrapper[4768]: I1124 17:17:29.205721 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlcs7" event={"ID":"430e0fa5-2027-43dc-be80-d6ee9f82d380","Type":"ContainerStarted","Data":"bda0676375d7ce02561ad60d8799fac20fb6aaef10f460dc2c80e8a9aeb19404"} Nov 24 17:17:29 crc kubenswrapper[4768]: I1124 17:17:29.235584 4768 
Nov 24 17:17:29 crc kubenswrapper[4768]: I1124 17:17:29.947480 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nlcs7"
Nov 24 17:17:29 crc kubenswrapper[4768]: I1124 17:17:29.947771 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nlcs7"
Nov 24 17:17:31 crc kubenswrapper[4768]: I1124 17:17:31.005734 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-nlcs7" podUID="430e0fa5-2027-43dc-be80-d6ee9f82d380" containerName="registry-server" probeResult="failure" output=<
Nov 24 17:17:31 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s
Nov 24 17:17:31 crc kubenswrapper[4768]: >
Nov 24 17:17:32 crc kubenswrapper[4768]: I1124 17:17:32.044834 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-gkks2"]
Nov 24 17:17:32 crc kubenswrapper[4768]: I1124 17:17:32.052873 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-gkks2"]
Nov 24 17:17:33 crc kubenswrapper[4768]: I1124 17:17:33.596242 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8b577a7-e026-4976-8737-8d103f7b2c7b" path="/var/lib/kubelet/pods/f8b577a7-e026-4976-8737-8d103f7b2c7b/volumes"
Nov 24 17:17:39 crc kubenswrapper[4768]: I1124 17:17:39.991977 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nlcs7"
Nov 24 17:17:40 crc kubenswrapper[4768]: I1124 17:17:40.042751 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nlcs7"
Nov 24 17:17:40 crc kubenswrapper[4768]: I1124 17:17:40.235104 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nlcs7"]
Nov 24 17:17:41 crc kubenswrapper[4768]: I1124 17:17:41.329847 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nlcs7" podUID="430e0fa5-2027-43dc-be80-d6ee9f82d380" containerName="registry-server" containerID="cri-o://bda0676375d7ce02561ad60d8799fac20fb6aaef10f460dc2c80e8a9aeb19404" gracePeriod=2
Nov 24 17:17:41 crc kubenswrapper[4768]: I1124 17:17:41.825109 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nlcs7"
Need to start a new one" pod="openshift-marketplace/community-operators-nlcs7" Nov 24 17:17:41 crc kubenswrapper[4768]: I1124 17:17:41.934620 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/430e0fa5-2027-43dc-be80-d6ee9f82d380-utilities\") pod \"430e0fa5-2027-43dc-be80-d6ee9f82d380\" (UID: \"430e0fa5-2027-43dc-be80-d6ee9f82d380\") " Nov 24 17:17:41 crc kubenswrapper[4768]: I1124 17:17:41.934734 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntp8j\" (UniqueName: \"kubernetes.io/projected/430e0fa5-2027-43dc-be80-d6ee9f82d380-kube-api-access-ntp8j\") pod \"430e0fa5-2027-43dc-be80-d6ee9f82d380\" (UID: \"430e0fa5-2027-43dc-be80-d6ee9f82d380\") " Nov 24 17:17:41 crc kubenswrapper[4768]: I1124 17:17:41.934841 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/430e0fa5-2027-43dc-be80-d6ee9f82d380-catalog-content\") pod \"430e0fa5-2027-43dc-be80-d6ee9f82d380\" (UID: \"430e0fa5-2027-43dc-be80-d6ee9f82d380\") " Nov 24 17:17:41 crc kubenswrapper[4768]: I1124 17:17:41.937101 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/430e0fa5-2027-43dc-be80-d6ee9f82d380-utilities" (OuterVolumeSpecName: "utilities") pod "430e0fa5-2027-43dc-be80-d6ee9f82d380" (UID: "430e0fa5-2027-43dc-be80-d6ee9f82d380"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:17:41 crc kubenswrapper[4768]: I1124 17:17:41.940181 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/430e0fa5-2027-43dc-be80-d6ee9f82d380-kube-api-access-ntp8j" (OuterVolumeSpecName: "kube-api-access-ntp8j") pod "430e0fa5-2027-43dc-be80-d6ee9f82d380" (UID: "430e0fa5-2027-43dc-be80-d6ee9f82d380"). InnerVolumeSpecName "kube-api-access-ntp8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:17:41 crc kubenswrapper[4768]: I1124 17:17:41.987918 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/430e0fa5-2027-43dc-be80-d6ee9f82d380-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "430e0fa5-2027-43dc-be80-d6ee9f82d380" (UID: "430e0fa5-2027-43dc-be80-d6ee9f82d380"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.037011 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/430e0fa5-2027-43dc-be80-d6ee9f82d380-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.037255 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntp8j\" (UniqueName: \"kubernetes.io/projected/430e0fa5-2027-43dc-be80-d6ee9f82d380-kube-api-access-ntp8j\") on node \"crc\" DevicePath \"\"" Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.037312 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/430e0fa5-2027-43dc-be80-d6ee9f82d380-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.047709 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-d9f4t"] Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.058504 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-d9f4t"] Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.068979 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-7c6d-account-create-76dsd"] Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.076628 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-ede5-account-create-4bcf7"] Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.085223 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-7c6d-account-create-76dsd"] Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.092383 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-ede5-account-create-4bcf7"] Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.099577 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-698c-account-create-kdm74"] Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.106501 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-698c-account-create-kdm74"] Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.114588 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-6jln2"] Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.121544 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-6jln2"] Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.339010 4768 generic.go:334] "Generic (PLEG): container finished" podID="430e0fa5-2027-43dc-be80-d6ee9f82d380" containerID="bda0676375d7ce02561ad60d8799fac20fb6aaef10f460dc2c80e8a9aeb19404" exitCode=0 Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.339071 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlcs7" event={"ID":"430e0fa5-2027-43dc-be80-d6ee9f82d380","Type":"ContainerDied","Data":"bda0676375d7ce02561ad60d8799fac20fb6aaef10f460dc2c80e8a9aeb19404"} Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.339136 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlcs7" event={"ID":"430e0fa5-2027-43dc-be80-d6ee9f82d380","Type":"ContainerDied","Data":"f0f9c23b43f64b0f5b024804db5c6ed2bd801954ea0b43dc0322533a253898ba"} Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.339159 4768 scope.go:117] "RemoveContainer" 
containerID="bda0676375d7ce02561ad60d8799fac20fb6aaef10f460dc2c80e8a9aeb19404" Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.339775 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nlcs7" Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.363249 4768 scope.go:117] "RemoveContainer" containerID="54a345d8bd388fcfc1e971992aaea151ca200a3e5cbb2d1344183fc7bfa7c096" Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.382765 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nlcs7"] Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.397165 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nlcs7"] Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.409058 4768 scope.go:117] "RemoveContainer" containerID="f5ff624ff5488d4b7f156d2d3e217a85a88a6597cb257c396a60c06940e7ba3a" Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.432224 4768 scope.go:117] "RemoveContainer" containerID="bda0676375d7ce02561ad60d8799fac20fb6aaef10f460dc2c80e8a9aeb19404" Nov 24 17:17:42 crc kubenswrapper[4768]: E1124 17:17:42.432880 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bda0676375d7ce02561ad60d8799fac20fb6aaef10f460dc2c80e8a9aeb19404\": container with ID starting with bda0676375d7ce02561ad60d8799fac20fb6aaef10f460dc2c80e8a9aeb19404 not found: ID does not exist" containerID="bda0676375d7ce02561ad60d8799fac20fb6aaef10f460dc2c80e8a9aeb19404" Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.432918 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bda0676375d7ce02561ad60d8799fac20fb6aaef10f460dc2c80e8a9aeb19404"} err="failed to get container status \"bda0676375d7ce02561ad60d8799fac20fb6aaef10f460dc2c80e8a9aeb19404\": rpc error: code = NotFound desc = could not find container \"bda0676375d7ce02561ad60d8799fac20fb6aaef10f460dc2c80e8a9aeb19404\": container with ID starting with bda0676375d7ce02561ad60d8799fac20fb6aaef10f460dc2c80e8a9aeb19404 not found: ID does not exist" Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.432946 4768 scope.go:117] "RemoveContainer" containerID="54a345d8bd388fcfc1e971992aaea151ca200a3e5cbb2d1344183fc7bfa7c096" Nov 24 17:17:42 crc kubenswrapper[4768]: E1124 17:17:42.433262 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54a345d8bd388fcfc1e971992aaea151ca200a3e5cbb2d1344183fc7bfa7c096\": container with ID starting with 54a345d8bd388fcfc1e971992aaea151ca200a3e5cbb2d1344183fc7bfa7c096 not found: ID does not exist" containerID="54a345d8bd388fcfc1e971992aaea151ca200a3e5cbb2d1344183fc7bfa7c096" Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.433290 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54a345d8bd388fcfc1e971992aaea151ca200a3e5cbb2d1344183fc7bfa7c096"} err="failed to get container status \"54a345d8bd388fcfc1e971992aaea151ca200a3e5cbb2d1344183fc7bfa7c096\": rpc error: code = NotFound desc = could not find container \"54a345d8bd388fcfc1e971992aaea151ca200a3e5cbb2d1344183fc7bfa7c096\": container with ID starting with 54a345d8bd388fcfc1e971992aaea151ca200a3e5cbb2d1344183fc7bfa7c096 not found: ID does not exist" Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.433308 4768 scope.go:117] 
"RemoveContainer" containerID="f5ff624ff5488d4b7f156d2d3e217a85a88a6597cb257c396a60c06940e7ba3a" Nov 24 17:17:42 crc kubenswrapper[4768]: E1124 17:17:42.433634 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5ff624ff5488d4b7f156d2d3e217a85a88a6597cb257c396a60c06940e7ba3a\": container with ID starting with f5ff624ff5488d4b7f156d2d3e217a85a88a6597cb257c396a60c06940e7ba3a not found: ID does not exist" containerID="f5ff624ff5488d4b7f156d2d3e217a85a88a6597cb257c396a60c06940e7ba3a" Nov 24 17:17:42 crc kubenswrapper[4768]: I1124 17:17:42.433664 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5ff624ff5488d4b7f156d2d3e217a85a88a6597cb257c396a60c06940e7ba3a"} err="failed to get container status \"f5ff624ff5488d4b7f156d2d3e217a85a88a6597cb257c396a60c06940e7ba3a\": rpc error: code = NotFound desc = could not find container \"f5ff624ff5488d4b7f156d2d3e217a85a88a6597cb257c396a60c06940e7ba3a\": container with ID starting with f5ff624ff5488d4b7f156d2d3e217a85a88a6597cb257c396a60c06940e7ba3a not found: ID does not exist" Nov 24 17:17:43 crc kubenswrapper[4768]: I1124 17:17:43.028286 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-q2dwl"] Nov 24 17:17:43 crc kubenswrapper[4768]: I1124 17:17:43.036429 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-q2dwl"] Nov 24 17:17:43 crc kubenswrapper[4768]: I1124 17:17:43.598169 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="430e0fa5-2027-43dc-be80-d6ee9f82d380" path="/var/lib/kubelet/pods/430e0fa5-2027-43dc-be80-d6ee9f82d380/volumes" Nov 24 17:17:43 crc kubenswrapper[4768]: I1124 17:17:43.599978 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5acf409c-da92-42ec-982a-b2d1f34be104" path="/var/lib/kubelet/pods/5acf409c-da92-42ec-982a-b2d1f34be104/volumes" Nov 24 17:17:43 crc kubenswrapper[4768]: I1124 17:17:43.601482 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f82633b-8229-49c8-92f3-4bffcd57f7ba" path="/var/lib/kubelet/pods/5f82633b-8229-49c8-92f3-4bffcd57f7ba/volumes" Nov 24 17:17:43 crc kubenswrapper[4768]: I1124 17:17:43.603914 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="607cc9c4-da33-4557-b019-18efe88914f5" path="/var/lib/kubelet/pods/607cc9c4-da33-4557-b019-18efe88914f5/volumes" Nov 24 17:17:43 crc kubenswrapper[4768]: I1124 17:17:43.606986 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b7d2e95-2ad1-47c3-a97c-65d3821742b3" path="/var/lib/kubelet/pods/7b7d2e95-2ad1-47c3-a97c-65d3821742b3/volumes" Nov 24 17:17:43 crc kubenswrapper[4768]: I1124 17:17:43.609418 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8229b191-56fe-4e64-8a62-1213c86a792c" path="/var/lib/kubelet/pods/8229b191-56fe-4e64-8a62-1213c86a792c/volumes" Nov 24 17:17:43 crc kubenswrapper[4768]: I1124 17:17:43.611785 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e53c068b-3ea6-4b03-a740-e296a2f3f7e0" path="/var/lib/kubelet/pods/e53c068b-3ea6-4b03-a740-e296a2f3f7e0/volumes" Nov 24 17:17:54 crc kubenswrapper[4768]: I1124 17:17:54.038983 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-2cnhg"] Nov 24 17:17:54 crc kubenswrapper[4768]: I1124 17:17:54.046545 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-2cnhg"] Nov 24 17:17:55 
Nov 24 17:18:00 crc kubenswrapper[4768]: I1124 17:18:00.048783 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-c555-account-create-x2sqr"]
Nov 24 17:18:00 crc kubenswrapper[4768]: I1124 17:18:00.057566 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-create-zcgvl"]
Nov 24 17:18:00 crc kubenswrapper[4768]: I1124 17:18:00.066212 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-c555-account-create-x2sqr"]
Nov 24 17:18:00 crc kubenswrapper[4768]: I1124 17:18:00.075505 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-create-zcgvl"]
Nov 24 17:18:01 crc kubenswrapper[4768]: I1124 17:18:01.590937 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b5ab834-a98f-4ace-a22f-cde15ebf7f4b" path="/var/lib/kubelet/pods/2b5ab834-a98f-4ace-a22f-cde15ebf7f4b/volumes"
Nov 24 17:18:01 crc kubenswrapper[4768]: I1124 17:18:01.591820 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8334061-2f24-4d34-a921-10d05dd32ec7" path="/var/lib/kubelet/pods/e8334061-2f24-4d34-a921-10d05dd32ec7/volumes"
Nov 24 17:18:22 crc kubenswrapper[4768]: I1124 17:18:22.042026 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-cntsw"]
Nov 24 17:18:22 crc kubenswrapper[4768]: I1124 17:18:22.054920 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-cntsw"]
Nov 24 17:18:23 crc kubenswrapper[4768]: I1124 17:18:23.618729 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14b5f872-0b2d-4937-bc26-dac18713087f" path="/var/lib/kubelet/pods/14b5f872-0b2d-4937-bc26-dac18713087f/volumes"
Nov 24 17:18:26 crc kubenswrapper[4768]: I1124 17:18:26.255761 4768 scope.go:117] "RemoveContainer" containerID="423e81fee4762b3ef15b48c3fb59a65136bf15c6b1423b40632346b8636a4462"
Nov 24 17:18:26 crc kubenswrapper[4768]: I1124 17:18:26.308156 4768 scope.go:117] "RemoveContainer" containerID="d825fb67a0cde5008343df7fb1b4b6d8fdc27806f37a6d0e4aca0a8c671190df"
Nov 24 17:18:26 crc kubenswrapper[4768]: I1124 17:18:26.331517 4768 scope.go:117] "RemoveContainer" containerID="418266e40ed8e2d5bf7ffc2e7dbc0319643b345e815b68386c6847a7fbada2b2"
Nov 24 17:18:26 crc kubenswrapper[4768]: I1124 17:18:26.389903 4768 scope.go:117] "RemoveContainer" containerID="4434652a1b4744b77d711b68b3dd8b8e4245cce746916904bcb638fcf3c65a47"
Nov 24 17:18:26 crc kubenswrapper[4768]: I1124 17:18:26.438742 4768 scope.go:117] "RemoveContainer" containerID="56ae578635b0020b061a390eed7444930d4cc1631910d6128afd196e89ce4f2a"
Nov 24 17:18:26 crc kubenswrapper[4768]: I1124 17:18:26.499297 4768 scope.go:117] "RemoveContainer" containerID="713baed01a67c2d3ed923ff5fd48259de70ec823325ba09c324af3a92924ae23"
Nov 24 17:18:26 crc kubenswrapper[4768]: I1124 17:18:26.537318 4768 scope.go:117] "RemoveContainer" containerID="947367c61d4f4f77a219bccbc69d51031bb44c69cdb44db4244997b7f7ae8e23"
Nov 24 17:18:26 crc kubenswrapper[4768]: I1124 17:18:26.574592 4768 scope.go:117] "RemoveContainer" containerID="d06a5f55c14e469642212a9d801511ce728cc2a49fbaf94f476c90d880656dea"
Nov 24 17:18:26 crc kubenswrapper[4768]: I1124 17:18:26.595387 4768 scope.go:117] "RemoveContainer" containerID="a0aeeeb1b45f605a1fe9f36b1d8f7305e727e5c21ab06523d10f0e2164965154"
containerID="a0aeeeb1b45f605a1fe9f36b1d8f7305e727e5c21ab06523d10f0e2164965154" Nov 24 17:18:26 crc kubenswrapper[4768]: I1124 17:18:26.619721 4768 scope.go:117] "RemoveContainer" containerID="c0b186d9e208b809daec273f120a2e47feea0d97e7789b0fd792e936e59f4a3a" Nov 24 17:18:26 crc kubenswrapper[4768]: I1124 17:18:26.654610 4768 scope.go:117] "RemoveContainer" containerID="3d47d19127ffafb2575aff54155d061f52c9bd4fcd67da496d0aab96738c3e28" Nov 24 17:18:32 crc kubenswrapper[4768]: I1124 17:18:32.039181 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-v5vq6"] Nov 24 17:18:32 crc kubenswrapper[4768]: I1124 17:18:32.052209 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-v5vq6"] Nov 24 17:18:32 crc kubenswrapper[4768]: I1124 17:18:32.060761 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-rdt7d"] Nov 24 17:18:32 crc kubenswrapper[4768]: I1124 17:18:32.068496 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-rdt7d"] Nov 24 17:18:33 crc kubenswrapper[4768]: I1124 17:18:33.598986 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07b29f36-8738-4aff-b55f-9bf0ce77e344" path="/var/lib/kubelet/pods/07b29f36-8738-4aff-b55f-9bf0ce77e344/volumes" Nov 24 17:18:33 crc kubenswrapper[4768]: I1124 17:18:33.600715 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a25ecf7c-a4b8-40e9-97b1-2b52c3094474" path="/var/lib/kubelet/pods/a25ecf7c-a4b8-40e9-97b1-2b52c3094474/volumes" Nov 24 17:18:41 crc kubenswrapper[4768]: I1124 17:18:41.029622 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-6qc9l"] Nov 24 17:18:41 crc kubenswrapper[4768]: I1124 17:18:41.039686 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-6qc9l"] Nov 24 17:18:41 crc kubenswrapper[4768]: I1124 17:18:41.594157 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d73241-8027-4861-83ae-a766feceadd2" path="/var/lib/kubelet/pods/56d73241-8027-4861-83ae-a766feceadd2/volumes" Nov 24 17:18:45 crc kubenswrapper[4768]: I1124 17:18:45.031808 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-w8vsq"] Nov 24 17:18:45 crc kubenswrapper[4768]: I1124 17:18:45.046755 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-w8vsq"] Nov 24 17:18:45 crc kubenswrapper[4768]: I1124 17:18:45.601677 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="632b579a-27e1-4431-a7ad-32631cf804b6" path="/var/lib/kubelet/pods/632b579a-27e1-4431-a7ad-32631cf804b6/volumes" Nov 24 17:18:56 crc kubenswrapper[4768]: I1124 17:18:56.051859 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-db-create-mpnzf"] Nov 24 17:18:56 crc kubenswrapper[4768]: I1124 17:18:56.068623 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-27ba-account-create-pcz7v"] Nov 24 17:18:56 crc kubenswrapper[4768]: I1124 17:18:56.083480 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-db-create-mpnzf"] Nov 24 17:18:56 crc kubenswrapper[4768]: I1124 17:18:56.092891 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-27ba-account-create-pcz7v"] Nov 24 17:18:57 crc kubenswrapper[4768]: I1124 17:18:57.597194 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bd6be468-91c4-4bd5-8f6c-54396782c17f" path="/var/lib/kubelet/pods/bd6be468-91c4-4bd5-8f6c-54396782c17f/volumes" Nov 24 17:18:57 crc kubenswrapper[4768]: I1124 17:18:57.599149 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f75aecba-ed47-439f-80f3-3e435c38a8c6" path="/var/lib/kubelet/pods/f75aecba-ed47-439f-80f3-3e435c38a8c6/volumes" Nov 24 17:19:04 crc kubenswrapper[4768]: I1124 17:19:04.892812 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:19:04 crc kubenswrapper[4768]: I1124 17:19:04.893444 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:19:26 crc kubenswrapper[4768]: I1124 17:19:26.935908 4768 scope.go:117] "RemoveContainer" containerID="105f280ac06601dc5642a7a91bf4424487b9ed12801541053ef2ce3dce0e5b9a" Nov 24 17:19:27 crc kubenswrapper[4768]: I1124 17:19:27.000942 4768 scope.go:117] "RemoveContainer" containerID="6c1ccef1f6f0fff3036ea6cddb7db4339f3ec232a3476a7e67984b2c5ac696fc" Nov 24 17:19:27 crc kubenswrapper[4768]: I1124 17:19:27.074150 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5871-account-create-rxhx5"] Nov 24 17:19:27 crc kubenswrapper[4768]: I1124 17:19:27.077848 4768 scope.go:117] "RemoveContainer" containerID="f81dda92d7320acf88a0d118934af6ca1fa5430ab04223544b5a0183de5f4ec7" Nov 24 17:19:27 crc kubenswrapper[4768]: I1124 17:19:27.086975 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5871-account-create-rxhx5"] Nov 24 17:19:27 crc kubenswrapper[4768]: I1124 17:19:27.096634 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-68lhn"] Nov 24 17:19:27 crc kubenswrapper[4768]: I1124 17:19:27.106744 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-68lhn"] Nov 24 17:19:27 crc kubenswrapper[4768]: I1124 17:19:27.115100 4768 scope.go:117] "RemoveContainer" containerID="8d483905d884fd224c5eeba6e6fc981bf2d20bc585c08c6ea52636b3a277423b" Nov 24 17:19:27 crc kubenswrapper[4768]: I1124 17:19:27.115864 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-zzwz5"] Nov 24 17:19:27 crc kubenswrapper[4768]: I1124 17:19:27.126273 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-z8lj9"] Nov 24 17:19:27 crc kubenswrapper[4768]: I1124 17:19:27.134693 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-9aef-account-create-mjnp6"] Nov 24 17:19:27 crc kubenswrapper[4768]: I1124 17:19:27.144689 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-z8lj9"] Nov 24 17:19:27 crc kubenswrapper[4768]: I1124 17:19:27.151334 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-9aef-account-create-mjnp6"] Nov 24 17:19:27 crc kubenswrapper[4768]: I1124 17:19:27.158550 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-zzwz5"] Nov 24 17:19:27 crc kubenswrapper[4768]: I1124 17:19:27.190666 
Nov 24 17:19:27 crc kubenswrapper[4768]: I1124 17:19:27.215270 4768 scope.go:117] "RemoveContainer" containerID="7b9aca73978f92a37a42dea1c3ada1057ff5ba25f851b9858677fcf99e249ffd"
Nov 24 17:19:27 crc kubenswrapper[4768]: I1124 17:19:27.591386 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67d150e1-af0e-45d5-b366-e9e550d7457a" path="/var/lib/kubelet/pods/67d150e1-af0e-45d5-b366-e9e550d7457a/volumes"
Nov 24 17:19:27 crc kubenswrapper[4768]: I1124 17:19:27.592542 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8" path="/var/lib/kubelet/pods/6b5e8a5b-e142-4f3e-95ea-f5c0c9e460b8/volumes"
Nov 24 17:19:27 crc kubenswrapper[4768]: I1124 17:19:27.593078 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83837a2f-936f-4af4-b223-b3e109491af4" path="/var/lib/kubelet/pods/83837a2f-936f-4af4-b223-b3e109491af4/volumes"
Nov 24 17:19:27 crc kubenswrapper[4768]: I1124 17:19:27.593635 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87c522f8-072d-496b-936e-0a692e3c1149" path="/var/lib/kubelet/pods/87c522f8-072d-496b-936e-0a692e3c1149/volumes"
Nov 24 17:19:27 crc kubenswrapper[4768]: I1124 17:19:27.594707 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfd60bb7-f834-43fd-9758-842ebcf0fc3b" path="/var/lib/kubelet/pods/bfd60bb7-f834-43fd-9758-842ebcf0fc3b/volumes"
Nov 24 17:19:28 crc kubenswrapper[4768]: I1124 17:19:28.026542 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9620-account-create-2k958"]
Nov 24 17:19:28 crc kubenswrapper[4768]: I1124 17:19:28.033027 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9620-account-create-2k958"]
Nov 24 17:19:29 crc kubenswrapper[4768]: I1124 17:19:29.592818 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17fbe0e3-4301-4a77-b1a1-ef966b69f21b" path="/var/lib/kubelet/pods/17fbe0e3-4301-4a77-b1a1-ef966b69f21b/volumes"
Nov 24 17:19:34 crc kubenswrapper[4768]: I1124 17:19:34.893500 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 17:19:34 crc kubenswrapper[4768]: I1124 17:19:34.894108 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 17:20:04 crc kubenswrapper[4768]: I1124 17:20:04.893287 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 17:20:04 crc kubenswrapper[4768]: I1124 17:20:04.893909 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:20:04 crc kubenswrapper[4768]: I1124 17:20:04.893949 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf255" Nov 24 17:20:04 crc kubenswrapper[4768]: I1124 17:20:04.894653 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c"} pod="openshift-machine-config-operator/machine-config-daemon-jf255" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 17:20:04 crc kubenswrapper[4768]: I1124 17:20:04.894700 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" containerID="cri-o://99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" gracePeriod=600 Nov 24 17:20:05 crc kubenswrapper[4768]: E1124 17:20:05.032233 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:20:05 crc kubenswrapper[4768]: I1124 17:20:05.857200 4768 generic.go:334] "Generic (PLEG): container finished" podID="517d8128-bef5-40a3-a786-5010780c2a58" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" exitCode=0 Nov 24 17:20:05 crc kubenswrapper[4768]: I1124 17:20:05.857246 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" event={"ID":"517d8128-bef5-40a3-a786-5010780c2a58","Type":"ContainerDied","Data":"99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c"} Nov 24 17:20:05 crc kubenswrapper[4768]: I1124 17:20:05.857279 4768 scope.go:117] "RemoveContainer" containerID="3b9927303786853c5de5f1aba770c6858638a08a624bec5c7cfe0a60dd91f385" Nov 24 17:20:05 crc kubenswrapper[4768]: I1124 17:20:05.857913 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:20:05 crc kubenswrapper[4768]: E1124 17:20:05.858226 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:20:06 crc kubenswrapper[4768]: I1124 17:20:06.037445 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6mx2x"] Nov 24 17:20:06 crc kubenswrapper[4768]: I1124 17:20:06.048207 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6mx2x"] Nov 24 17:20:07 crc kubenswrapper[4768]: I1124 17:20:07.595204 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="63ae2678-f257-4fe9-b15c-72c7171320ad" path="/var/lib/kubelet/pods/63ae2678-f257-4fe9-b15c-72c7171320ad/volumes" Nov 24 17:20:17 crc kubenswrapper[4768]: I1124 17:20:17.582466 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:20:17 crc kubenswrapper[4768]: E1124 17:20:17.583527 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:20:27 crc kubenswrapper[4768]: I1124 17:20:27.362247 4768 scope.go:117] "RemoveContainer" containerID="8a50858fd1016bbb3d1597ddc8aa339facd09512dd17c081bf1c6a43daf7f13c" Nov 24 17:20:27 crc kubenswrapper[4768]: I1124 17:20:27.389557 4768 scope.go:117] "RemoveContainer" containerID="35936d586df799b245c9788dea50273a5ab2b119148280f5e12f3f040c50ee8c" Nov 24 17:20:27 crc kubenswrapper[4768]: I1124 17:20:27.472093 4768 scope.go:117] "RemoveContainer" containerID="9d0d72f00e8abcd148485c1cfa0e2496dc121d3e39fe5edd96e92820ed489184" Nov 24 17:20:27 crc kubenswrapper[4768]: I1124 17:20:27.501827 4768 scope.go:117] "RemoveContainer" containerID="9b39fb665310ba9e8722b968f15960332d4a4f4db8ead5a1de7d392565bda217" Nov 24 17:20:27 crc kubenswrapper[4768]: I1124 17:20:27.558877 4768 scope.go:117] "RemoveContainer" containerID="eae7a35038c5ac7ad834830f1ee390a0ccd282fdebcb761475a406e7153fffc5" Nov 24 17:20:27 crc kubenswrapper[4768]: I1124 17:20:27.598526 4768 scope.go:117] "RemoveContainer" containerID="dfa5ef62f68844084fb9eaae8dc0f4ff331883249b3a6f39aabfc4d2645e6b44" Nov 24 17:20:27 crc kubenswrapper[4768]: I1124 17:20:27.640618 4768 scope.go:117] "RemoveContainer" containerID="eee42d8c2288eb60caf89af5348b3c4be5bf946dfd307099c59485aa5a431567" Nov 24 17:20:28 crc kubenswrapper[4768]: I1124 17:20:28.581963 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:20:28 crc kubenswrapper[4768]: E1124 17:20:28.582794 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:20:29 crc kubenswrapper[4768]: I1124 17:20:29.039964 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-qdb22"] Nov 24 17:20:29 crc kubenswrapper[4768]: I1124 17:20:29.049203 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-qdb22"] Nov 24 17:20:29 crc kubenswrapper[4768]: I1124 17:20:29.592284 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f88274f-db1f-4ab0-88bf-12a230c0c5e6" path="/var/lib/kubelet/pods/8f88274f-db1f-4ab0-88bf-12a230c0c5e6/volumes" Nov 24 17:20:30 crc kubenswrapper[4768]: I1124 17:20:30.023749 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qhvz6"] Nov 24 17:20:30 crc kubenswrapper[4768]: I1124 17:20:30.033019 4768 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qhvz6"] Nov 24 17:20:31 crc kubenswrapper[4768]: I1124 17:20:31.592985 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfd9e2e0-6b33-444a-a253-1d4e75a13681" path="/var/lib/kubelet/pods/bfd9e2e0-6b33-444a-a253-1d4e75a13681/volumes" Nov 24 17:20:39 crc kubenswrapper[4768]: I1124 17:20:39.586549 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:20:39 crc kubenswrapper[4768]: E1124 17:20:39.587269 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:20:48 crc kubenswrapper[4768]: I1124 17:20:48.923530 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qhddh/must-gather-7c9fk"] Nov 24 17:20:48 crc kubenswrapper[4768]: E1124 17:20:48.924589 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430e0fa5-2027-43dc-be80-d6ee9f82d380" containerName="extract-utilities" Nov 24 17:20:48 crc kubenswrapper[4768]: I1124 17:20:48.924608 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="430e0fa5-2027-43dc-be80-d6ee9f82d380" containerName="extract-utilities" Nov 24 17:20:48 crc kubenswrapper[4768]: E1124 17:20:48.924627 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430e0fa5-2027-43dc-be80-d6ee9f82d380" containerName="registry-server" Nov 24 17:20:48 crc kubenswrapper[4768]: I1124 17:20:48.924634 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="430e0fa5-2027-43dc-be80-d6ee9f82d380" containerName="registry-server" Nov 24 17:20:48 crc kubenswrapper[4768]: E1124 17:20:48.924660 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430e0fa5-2027-43dc-be80-d6ee9f82d380" containerName="extract-content" Nov 24 17:20:48 crc kubenswrapper[4768]: I1124 17:20:48.924667 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="430e0fa5-2027-43dc-be80-d6ee9f82d380" containerName="extract-content" Nov 24 17:20:48 crc kubenswrapper[4768]: I1124 17:20:48.924909 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="430e0fa5-2027-43dc-be80-d6ee9f82d380" containerName="registry-server" Nov 24 17:20:48 crc kubenswrapper[4768]: I1124 17:20:48.926114 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qhddh/must-gather-7c9fk" Nov 24 17:20:48 crc kubenswrapper[4768]: I1124 17:20:48.931250 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qhddh"/"kube-root-ca.crt" Nov 24 17:20:48 crc kubenswrapper[4768]: I1124 17:20:48.931585 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qhddh"/"default-dockercfg-xsksw" Nov 24 17:20:48 crc kubenswrapper[4768]: I1124 17:20:48.931763 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qhddh"/"openshift-service-ca.crt" Nov 24 17:20:48 crc kubenswrapper[4768]: I1124 17:20:48.989121 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qhddh/must-gather-7c9fk"] Nov 24 17:20:49 crc kubenswrapper[4768]: I1124 17:20:49.008566 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwds6\" (UniqueName: \"kubernetes.io/projected/bd9c8e50-a7e3-49c9-a2dc-626d5324539a-kube-api-access-nwds6\") pod \"must-gather-7c9fk\" (UID: \"bd9c8e50-a7e3-49c9-a2dc-626d5324539a\") " pod="openshift-must-gather-qhddh/must-gather-7c9fk" Nov 24 17:20:49 crc kubenswrapper[4768]: I1124 17:20:49.008709 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd9c8e50-a7e3-49c9-a2dc-626d5324539a-must-gather-output\") pod \"must-gather-7c9fk\" (UID: \"bd9c8e50-a7e3-49c9-a2dc-626d5324539a\") " pod="openshift-must-gather-qhddh/must-gather-7c9fk" Nov 24 17:20:49 crc kubenswrapper[4768]: I1124 17:20:49.109317 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwds6\" (UniqueName: \"kubernetes.io/projected/bd9c8e50-a7e3-49c9-a2dc-626d5324539a-kube-api-access-nwds6\") pod \"must-gather-7c9fk\" (UID: \"bd9c8e50-a7e3-49c9-a2dc-626d5324539a\") " pod="openshift-must-gather-qhddh/must-gather-7c9fk" Nov 24 17:20:49 crc kubenswrapper[4768]: I1124 17:20:49.109442 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd9c8e50-a7e3-49c9-a2dc-626d5324539a-must-gather-output\") pod \"must-gather-7c9fk\" (UID: \"bd9c8e50-a7e3-49c9-a2dc-626d5324539a\") " pod="openshift-must-gather-qhddh/must-gather-7c9fk" Nov 24 17:20:49 crc kubenswrapper[4768]: I1124 17:20:49.109911 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd9c8e50-a7e3-49c9-a2dc-626d5324539a-must-gather-output\") pod \"must-gather-7c9fk\" (UID: \"bd9c8e50-a7e3-49c9-a2dc-626d5324539a\") " pod="openshift-must-gather-qhddh/must-gather-7c9fk" Nov 24 17:20:49 crc kubenswrapper[4768]: I1124 17:20:49.136010 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwds6\" (UniqueName: \"kubernetes.io/projected/bd9c8e50-a7e3-49c9-a2dc-626d5324539a-kube-api-access-nwds6\") pod \"must-gather-7c9fk\" (UID: \"bd9c8e50-a7e3-49c9-a2dc-626d5324539a\") " pod="openshift-must-gather-qhddh/must-gather-7c9fk" Nov 24 17:20:49 crc kubenswrapper[4768]: I1124 17:20:49.276997 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qhddh/must-gather-7c9fk" Nov 24 17:20:49 crc kubenswrapper[4768]: I1124 17:20:49.733820 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qhddh/must-gather-7c9fk"] Nov 24 17:20:49 crc kubenswrapper[4768]: I1124 17:20:49.761150 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 17:20:50 crc kubenswrapper[4768]: I1124 17:20:50.265814 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhddh/must-gather-7c9fk" event={"ID":"bd9c8e50-a7e3-49c9-a2dc-626d5324539a","Type":"ContainerStarted","Data":"8f4d4db6b675ec1c2afff860ea5b75ba5d7c10722cbd6ca269638dec93c68077"} Nov 24 17:20:53 crc kubenswrapper[4768]: I1124 17:20:53.581687 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:20:53 crc kubenswrapper[4768]: E1124 17:20:53.582212 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:20:56 crc kubenswrapper[4768]: I1124 17:20:56.320950 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhddh/must-gather-7c9fk" event={"ID":"bd9c8e50-a7e3-49c9-a2dc-626d5324539a","Type":"ContainerStarted","Data":"c405567f39b0f40d3a7657636ff3146a97afc4cc965bf28cd6ea78ee3d8fbc49"} Nov 24 17:20:56 crc kubenswrapper[4768]: I1124 17:20:56.321557 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhddh/must-gather-7c9fk" event={"ID":"bd9c8e50-a7e3-49c9-a2dc-626d5324539a","Type":"ContainerStarted","Data":"f6dee8f6402463803d2c05da04a7e555ae851dee153adfcdd480683a86671f23"} Nov 24 17:20:56 crc kubenswrapper[4768]: I1124 17:20:56.342604 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qhddh/must-gather-7c9fk" podStartSLOduration=2.685368323 podStartE2EDuration="8.342585048s" podCreationTimestamp="2025-11-24 17:20:48 +0000 UTC" firstStartedPulling="2025-11-24 17:20:49.760953305 +0000 UTC m=+1731.007921963" lastFinishedPulling="2025-11-24 17:20:55.41817003 +0000 UTC m=+1736.665138688" observedRunningTime="2025-11-24 17:20:56.332187285 +0000 UTC m=+1737.579155943" watchObservedRunningTime="2025-11-24 17:20:56.342585048 +0000 UTC m=+1737.589553706" Nov 24 17:20:59 crc kubenswrapper[4768]: I1124 17:20:59.324765 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qhddh/crc-debug-cjv8v"] Nov 24 17:20:59 crc kubenswrapper[4768]: I1124 17:20:59.327384 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qhddh/crc-debug-cjv8v" Nov 24 17:20:59 crc kubenswrapper[4768]: I1124 17:20:59.409936 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chx4h\" (UniqueName: \"kubernetes.io/projected/c505583e-8362-4995-830c-afd04a8e8705-kube-api-access-chx4h\") pod \"crc-debug-cjv8v\" (UID: \"c505583e-8362-4995-830c-afd04a8e8705\") " pod="openshift-must-gather-qhddh/crc-debug-cjv8v" Nov 24 17:20:59 crc kubenswrapper[4768]: I1124 17:20:59.410108 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c505583e-8362-4995-830c-afd04a8e8705-host\") pod \"crc-debug-cjv8v\" (UID: \"c505583e-8362-4995-830c-afd04a8e8705\") " pod="openshift-must-gather-qhddh/crc-debug-cjv8v" Nov 24 17:20:59 crc kubenswrapper[4768]: I1124 17:20:59.511655 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chx4h\" (UniqueName: \"kubernetes.io/projected/c505583e-8362-4995-830c-afd04a8e8705-kube-api-access-chx4h\") pod \"crc-debug-cjv8v\" (UID: \"c505583e-8362-4995-830c-afd04a8e8705\") " pod="openshift-must-gather-qhddh/crc-debug-cjv8v" Nov 24 17:20:59 crc kubenswrapper[4768]: I1124 17:20:59.511779 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c505583e-8362-4995-830c-afd04a8e8705-host\") pod \"crc-debug-cjv8v\" (UID: \"c505583e-8362-4995-830c-afd04a8e8705\") " pod="openshift-must-gather-qhddh/crc-debug-cjv8v" Nov 24 17:20:59 crc kubenswrapper[4768]: I1124 17:20:59.511850 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c505583e-8362-4995-830c-afd04a8e8705-host\") pod \"crc-debug-cjv8v\" (UID: \"c505583e-8362-4995-830c-afd04a8e8705\") " pod="openshift-must-gather-qhddh/crc-debug-cjv8v" Nov 24 17:20:59 crc kubenswrapper[4768]: I1124 17:20:59.528891 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chx4h\" (UniqueName: \"kubernetes.io/projected/c505583e-8362-4995-830c-afd04a8e8705-kube-api-access-chx4h\") pod \"crc-debug-cjv8v\" (UID: \"c505583e-8362-4995-830c-afd04a8e8705\") " pod="openshift-must-gather-qhddh/crc-debug-cjv8v" Nov 24 17:20:59 crc kubenswrapper[4768]: I1124 17:20:59.648927 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qhddh/crc-debug-cjv8v" Nov 24 17:21:00 crc kubenswrapper[4768]: I1124 17:21:00.355718 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhddh/crc-debug-cjv8v" event={"ID":"c505583e-8362-4995-830c-afd04a8e8705","Type":"ContainerStarted","Data":"ebf07115044a70e64c70ca15e60713ce3d664d4878c9fb83fce68e0b80b4b700"} Nov 24 17:21:08 crc kubenswrapper[4768]: I1124 17:21:08.580731 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:21:08 crc kubenswrapper[4768]: E1124 17:21:08.581494 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:21:12 crc kubenswrapper[4768]: I1124 17:21:12.037870 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-6hqpk"] Nov 24 17:21:12 crc kubenswrapper[4768]: I1124 17:21:12.045276 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-6hqpk"] Nov 24 17:21:12 crc kubenswrapper[4768]: I1124 17:21:12.475746 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhddh/crc-debug-cjv8v" event={"ID":"c505583e-8362-4995-830c-afd04a8e8705","Type":"ContainerStarted","Data":"9e2a2d4dff517cf127338fd9171de8d0316d7ca5bf8c3d92a367eacfd2c08438"} Nov 24 17:21:12 crc kubenswrapper[4768]: I1124 17:21:12.497049 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qhddh/crc-debug-cjv8v" podStartSLOduration=1.559499325 podStartE2EDuration="13.49702845s" podCreationTimestamp="2025-11-24 17:20:59 +0000 UTC" firstStartedPulling="2025-11-24 17:20:59.68153004 +0000 UTC m=+1740.928498698" lastFinishedPulling="2025-11-24 17:21:11.619059165 +0000 UTC m=+1752.866027823" observedRunningTime="2025-11-24 17:21:12.493596093 +0000 UTC m=+1753.740564751" watchObservedRunningTime="2025-11-24 17:21:12.49702845 +0000 UTC m=+1753.743997108" Nov 24 17:21:13 crc kubenswrapper[4768]: I1124 17:21:13.592844 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9" path="/var/lib/kubelet/pods/1c79e6bf-ae03-4b73-9e78-d55aa9e05cd9/volumes" Nov 24 17:21:20 crc kubenswrapper[4768]: I1124 17:21:20.580498 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:21:20 crc kubenswrapper[4768]: E1124 17:21:20.582304 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:21:27 crc kubenswrapper[4768]: I1124 17:21:27.780305 4768 scope.go:117] "RemoveContainer" containerID="f6779e2c110cc57a2aea1551e82bd04afb6cc30a7fa2816b593b46c271eeaa25" Nov 24 17:21:28 crc kubenswrapper[4768]: I1124 17:21:28.083732 4768 scope.go:117] "RemoveContainer" 
containerID="dfd7d81cd6a5b8d2d1ae543a561e8e296c666380d0962e47e63eeb5721135f26" Nov 24 17:21:28 crc kubenswrapper[4768]: I1124 17:21:28.141726 4768 scope.go:117] "RemoveContainer" containerID="b103acf9ad0f3d26f8f85be888daf5cac6a3f66a63e1328fae01d08553aa855d" Nov 24 17:21:35 crc kubenswrapper[4768]: I1124 17:21:35.581110 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:21:35 crc kubenswrapper[4768]: E1124 17:21:35.582237 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:21:48 crc kubenswrapper[4768]: I1124 17:21:48.582101 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:21:48 crc kubenswrapper[4768]: E1124 17:21:48.583103 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:22:01 crc kubenswrapper[4768]: I1124 17:22:01.045085 4768 generic.go:334] "Generic (PLEG): container finished" podID="c505583e-8362-4995-830c-afd04a8e8705" containerID="9e2a2d4dff517cf127338fd9171de8d0316d7ca5bf8c3d92a367eacfd2c08438" exitCode=0 Nov 24 17:22:01 crc kubenswrapper[4768]: I1124 17:22:01.045178 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhddh/crc-debug-cjv8v" event={"ID":"c505583e-8362-4995-830c-afd04a8e8705","Type":"ContainerDied","Data":"9e2a2d4dff517cf127338fd9171de8d0316d7ca5bf8c3d92a367eacfd2c08438"} Nov 24 17:22:02 crc kubenswrapper[4768]: I1124 17:22:02.152515 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qhddh/crc-debug-cjv8v" Nov 24 17:22:02 crc kubenswrapper[4768]: I1124 17:22:02.183653 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qhddh/crc-debug-cjv8v"] Nov 24 17:22:02 crc kubenswrapper[4768]: I1124 17:22:02.190725 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qhddh/crc-debug-cjv8v"] Nov 24 17:22:02 crc kubenswrapper[4768]: I1124 17:22:02.338740 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chx4h\" (UniqueName: \"kubernetes.io/projected/c505583e-8362-4995-830c-afd04a8e8705-kube-api-access-chx4h\") pod \"c505583e-8362-4995-830c-afd04a8e8705\" (UID: \"c505583e-8362-4995-830c-afd04a8e8705\") " Nov 24 17:22:02 crc kubenswrapper[4768]: I1124 17:22:02.338835 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c505583e-8362-4995-830c-afd04a8e8705-host\") pod \"c505583e-8362-4995-830c-afd04a8e8705\" (UID: \"c505583e-8362-4995-830c-afd04a8e8705\") " Nov 24 17:22:02 crc kubenswrapper[4768]: I1124 17:22:02.339247 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c505583e-8362-4995-830c-afd04a8e8705-host" (OuterVolumeSpecName: "host") pod "c505583e-8362-4995-830c-afd04a8e8705" (UID: "c505583e-8362-4995-830c-afd04a8e8705"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:22:02 crc kubenswrapper[4768]: I1124 17:22:02.339631 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c505583e-8362-4995-830c-afd04a8e8705-host\") on node \"crc\" DevicePath \"\"" Nov 24 17:22:02 crc kubenswrapper[4768]: I1124 17:22:02.345740 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c505583e-8362-4995-830c-afd04a8e8705-kube-api-access-chx4h" (OuterVolumeSpecName: "kube-api-access-chx4h") pod "c505583e-8362-4995-830c-afd04a8e8705" (UID: "c505583e-8362-4995-830c-afd04a8e8705"). InnerVolumeSpecName "kube-api-access-chx4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:22:02 crc kubenswrapper[4768]: I1124 17:22:02.441400 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chx4h\" (UniqueName: \"kubernetes.io/projected/c505583e-8362-4995-830c-afd04a8e8705-kube-api-access-chx4h\") on node \"crc\" DevicePath \"\"" Nov 24 17:22:02 crc kubenswrapper[4768]: I1124 17:22:02.581980 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:22:02 crc kubenswrapper[4768]: E1124 17:22:02.582460 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:22:03 crc kubenswrapper[4768]: I1124 17:22:03.070015 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebf07115044a70e64c70ca15e60713ce3d664d4878c9fb83fce68e0b80b4b700" Nov 24 17:22:03 crc kubenswrapper[4768]: I1124 17:22:03.070091 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qhddh/crc-debug-cjv8v" Nov 24 17:22:03 crc kubenswrapper[4768]: I1124 17:22:03.356012 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qhddh/crc-debug-qgb7w"] Nov 24 17:22:03 crc kubenswrapper[4768]: E1124 17:22:03.357795 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c505583e-8362-4995-830c-afd04a8e8705" containerName="container-00" Nov 24 17:22:03 crc kubenswrapper[4768]: I1124 17:22:03.357897 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c505583e-8362-4995-830c-afd04a8e8705" containerName="container-00" Nov 24 17:22:03 crc kubenswrapper[4768]: I1124 17:22:03.358182 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c505583e-8362-4995-830c-afd04a8e8705" containerName="container-00" Nov 24 17:22:03 crc kubenswrapper[4768]: I1124 17:22:03.359097 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhddh/crc-debug-qgb7w" Nov 24 17:22:03 crc kubenswrapper[4768]: I1124 17:22:03.464251 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6zfm\" (UniqueName: \"kubernetes.io/projected/0b295c74-1421-4265-9277-99325ea0ca37-kube-api-access-j6zfm\") pod \"crc-debug-qgb7w\" (UID: \"0b295c74-1421-4265-9277-99325ea0ca37\") " pod="openshift-must-gather-qhddh/crc-debug-qgb7w" Nov 24 17:22:03 crc kubenswrapper[4768]: I1124 17:22:03.464326 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b295c74-1421-4265-9277-99325ea0ca37-host\") pod \"crc-debug-qgb7w\" (UID: \"0b295c74-1421-4265-9277-99325ea0ca37\") " pod="openshift-must-gather-qhddh/crc-debug-qgb7w" Nov 24 17:22:03 crc kubenswrapper[4768]: I1124 17:22:03.566086 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6zfm\" (UniqueName: \"kubernetes.io/projected/0b295c74-1421-4265-9277-99325ea0ca37-kube-api-access-j6zfm\") pod \"crc-debug-qgb7w\" (UID: \"0b295c74-1421-4265-9277-99325ea0ca37\") " pod="openshift-must-gather-qhddh/crc-debug-qgb7w" Nov 24 17:22:03 crc kubenswrapper[4768]: I1124 17:22:03.566161 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b295c74-1421-4265-9277-99325ea0ca37-host\") pod \"crc-debug-qgb7w\" (UID: \"0b295c74-1421-4265-9277-99325ea0ca37\") " pod="openshift-must-gather-qhddh/crc-debug-qgb7w" Nov 24 17:22:03 crc kubenswrapper[4768]: I1124 17:22:03.566330 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b295c74-1421-4265-9277-99325ea0ca37-host\") pod \"crc-debug-qgb7w\" (UID: \"0b295c74-1421-4265-9277-99325ea0ca37\") " pod="openshift-must-gather-qhddh/crc-debug-qgb7w" Nov 24 17:22:03 crc kubenswrapper[4768]: I1124 17:22:03.587151 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6zfm\" (UniqueName: \"kubernetes.io/projected/0b295c74-1421-4265-9277-99325ea0ca37-kube-api-access-j6zfm\") pod \"crc-debug-qgb7w\" (UID: \"0b295c74-1421-4265-9277-99325ea0ca37\") " pod="openshift-must-gather-qhddh/crc-debug-qgb7w" Nov 24 17:22:03 crc kubenswrapper[4768]: I1124 17:22:03.599040 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c505583e-8362-4995-830c-afd04a8e8705" 
path="/var/lib/kubelet/pods/c505583e-8362-4995-830c-afd04a8e8705/volumes" Nov 24 17:22:03 crc kubenswrapper[4768]: I1124 17:22:03.680769 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhddh/crc-debug-qgb7w" Nov 24 17:22:03 crc kubenswrapper[4768]: W1124 17:22:03.721076 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b295c74_1421_4265_9277_99325ea0ca37.slice/crio-7a97387988fb9d191d28916f137fe27ba2830cf8a9e63a98b0f2c04c1d7c4af1 WatchSource:0}: Error finding container 7a97387988fb9d191d28916f137fe27ba2830cf8a9e63a98b0f2c04c1d7c4af1: Status 404 returned error can't find the container with id 7a97387988fb9d191d28916f137fe27ba2830cf8a9e63a98b0f2c04c1d7c4af1 Nov 24 17:22:04 crc kubenswrapper[4768]: I1124 17:22:04.080663 4768 generic.go:334] "Generic (PLEG): container finished" podID="0b295c74-1421-4265-9277-99325ea0ca37" containerID="77d88f1fa03025633d11eea71044a9918951f3a1d315d6af464a302f338fa52a" exitCode=0 Nov 24 17:22:04 crc kubenswrapper[4768]: I1124 17:22:04.080759 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhddh/crc-debug-qgb7w" event={"ID":"0b295c74-1421-4265-9277-99325ea0ca37","Type":"ContainerDied","Data":"77d88f1fa03025633d11eea71044a9918951f3a1d315d6af464a302f338fa52a"} Nov 24 17:22:04 crc kubenswrapper[4768]: I1124 17:22:04.081008 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhddh/crc-debug-qgb7w" event={"ID":"0b295c74-1421-4265-9277-99325ea0ca37","Type":"ContainerStarted","Data":"7a97387988fb9d191d28916f137fe27ba2830cf8a9e63a98b0f2c04c1d7c4af1"} Nov 24 17:22:04 crc kubenswrapper[4768]: I1124 17:22:04.572244 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qhddh/crc-debug-qgb7w"] Nov 24 17:22:04 crc kubenswrapper[4768]: I1124 17:22:04.578605 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qhddh/crc-debug-qgb7w"] Nov 24 17:22:05 crc kubenswrapper[4768]: I1124 17:22:05.200243 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhddh/crc-debug-qgb7w" Nov 24 17:22:05 crc kubenswrapper[4768]: I1124 17:22:05.299881 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6zfm\" (UniqueName: \"kubernetes.io/projected/0b295c74-1421-4265-9277-99325ea0ca37-kube-api-access-j6zfm\") pod \"0b295c74-1421-4265-9277-99325ea0ca37\" (UID: \"0b295c74-1421-4265-9277-99325ea0ca37\") " Nov 24 17:22:05 crc kubenswrapper[4768]: I1124 17:22:05.300091 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b295c74-1421-4265-9277-99325ea0ca37-host\") pod \"0b295c74-1421-4265-9277-99325ea0ca37\" (UID: \"0b295c74-1421-4265-9277-99325ea0ca37\") " Nov 24 17:22:05 crc kubenswrapper[4768]: I1124 17:22:05.300300 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b295c74-1421-4265-9277-99325ea0ca37-host" (OuterVolumeSpecName: "host") pod "0b295c74-1421-4265-9277-99325ea0ca37" (UID: "0b295c74-1421-4265-9277-99325ea0ca37"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:22:05 crc kubenswrapper[4768]: I1124 17:22:05.300808 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b295c74-1421-4265-9277-99325ea0ca37-host\") on node \"crc\" DevicePath \"\"" Nov 24 17:22:05 crc kubenswrapper[4768]: I1124 17:22:05.306628 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b295c74-1421-4265-9277-99325ea0ca37-kube-api-access-j6zfm" (OuterVolumeSpecName: "kube-api-access-j6zfm") pod "0b295c74-1421-4265-9277-99325ea0ca37" (UID: "0b295c74-1421-4265-9277-99325ea0ca37"). InnerVolumeSpecName "kube-api-access-j6zfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:22:05 crc kubenswrapper[4768]: I1124 17:22:05.402658 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6zfm\" (UniqueName: \"kubernetes.io/projected/0b295c74-1421-4265-9277-99325ea0ca37-kube-api-access-j6zfm\") on node \"crc\" DevicePath \"\"" Nov 24 17:22:05 crc kubenswrapper[4768]: I1124 17:22:05.602554 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b295c74-1421-4265-9277-99325ea0ca37" path="/var/lib/kubelet/pods/0b295c74-1421-4265-9277-99325ea0ca37/volumes" Nov 24 17:22:05 crc kubenswrapper[4768]: I1124 17:22:05.762206 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qhddh/crc-debug-hs4jq"] Nov 24 17:22:05 crc kubenswrapper[4768]: E1124 17:22:05.762655 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b295c74-1421-4265-9277-99325ea0ca37" containerName="container-00" Nov 24 17:22:05 crc kubenswrapper[4768]: I1124 17:22:05.762678 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b295c74-1421-4265-9277-99325ea0ca37" containerName="container-00" Nov 24 17:22:05 crc kubenswrapper[4768]: I1124 17:22:05.762932 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b295c74-1421-4265-9277-99325ea0ca37" containerName="container-00" Nov 24 17:22:05 crc kubenswrapper[4768]: I1124 17:22:05.763715 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qhddh/crc-debug-hs4jq" Nov 24 17:22:05 crc kubenswrapper[4768]: I1124 17:22:05.914375 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b-host\") pod \"crc-debug-hs4jq\" (UID: \"b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b\") " pod="openshift-must-gather-qhddh/crc-debug-hs4jq" Nov 24 17:22:05 crc kubenswrapper[4768]: I1124 17:22:05.914534 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmsls\" (UniqueName: \"kubernetes.io/projected/b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b-kube-api-access-cmsls\") pod \"crc-debug-hs4jq\" (UID: \"b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b\") " pod="openshift-must-gather-qhddh/crc-debug-hs4jq" Nov 24 17:22:06 crc kubenswrapper[4768]: I1124 17:22:06.016172 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmsls\" (UniqueName: \"kubernetes.io/projected/b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b-kube-api-access-cmsls\") pod \"crc-debug-hs4jq\" (UID: \"b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b\") " pod="openshift-must-gather-qhddh/crc-debug-hs4jq" Nov 24 17:22:06 crc kubenswrapper[4768]: I1124 17:22:06.016638 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b-host\") pod \"crc-debug-hs4jq\" (UID: \"b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b\") " pod="openshift-must-gather-qhddh/crc-debug-hs4jq" Nov 24 17:22:06 crc kubenswrapper[4768]: I1124 17:22:06.016733 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b-host\") pod \"crc-debug-hs4jq\" (UID: \"b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b\") " pod="openshift-must-gather-qhddh/crc-debug-hs4jq" Nov 24 17:22:06 crc kubenswrapper[4768]: I1124 17:22:06.032590 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmsls\" (UniqueName: \"kubernetes.io/projected/b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b-kube-api-access-cmsls\") pod \"crc-debug-hs4jq\" (UID: \"b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b\") " pod="openshift-must-gather-qhddh/crc-debug-hs4jq" Nov 24 17:22:06 crc kubenswrapper[4768]: I1124 17:22:06.084017 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhddh/crc-debug-hs4jq" Nov 24 17:22:06 crc kubenswrapper[4768]: I1124 17:22:06.103469 4768 scope.go:117] "RemoveContainer" containerID="77d88f1fa03025633d11eea71044a9918951f3a1d315d6af464a302f338fa52a" Nov 24 17:22:06 crc kubenswrapper[4768]: I1124 17:22:06.103495 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qhddh/crc-debug-qgb7w" Nov 24 17:22:06 crc kubenswrapper[4768]: W1124 17:22:06.129551 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7cbcfbc_59b6_4bc0_88da_9bd16f0ad59b.slice/crio-d861a67d5e1b4bd24bf582709abcf8ca5d0eb6382cf160fb7f8821ac39a12bb0 WatchSource:0}: Error finding container d861a67d5e1b4bd24bf582709abcf8ca5d0eb6382cf160fb7f8821ac39a12bb0: Status 404 returned error can't find the container with id d861a67d5e1b4bd24bf582709abcf8ca5d0eb6382cf160fb7f8821ac39a12bb0 Nov 24 17:22:07 crc kubenswrapper[4768]: I1124 17:22:07.118772 4768 generic.go:334] "Generic (PLEG): container finished" podID="b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b" containerID="128182d236c5f0b8366d3ef254891d3423eaf5c17d2a3684e6662c63af189742" exitCode=0 Nov 24 17:22:07 crc kubenswrapper[4768]: I1124 17:22:07.118802 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhddh/crc-debug-hs4jq" event={"ID":"b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b","Type":"ContainerDied","Data":"128182d236c5f0b8366d3ef254891d3423eaf5c17d2a3684e6662c63af189742"} Nov 24 17:22:07 crc kubenswrapper[4768]: I1124 17:22:07.120634 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhddh/crc-debug-hs4jq" event={"ID":"b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b","Type":"ContainerStarted","Data":"d861a67d5e1b4bd24bf582709abcf8ca5d0eb6382cf160fb7f8821ac39a12bb0"} Nov 24 17:22:07 crc kubenswrapper[4768]: I1124 17:22:07.171434 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qhddh/crc-debug-hs4jq"] Nov 24 17:22:07 crc kubenswrapper[4768]: I1124 17:22:07.181247 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qhddh/crc-debug-hs4jq"] Nov 24 17:22:08 crc kubenswrapper[4768]: I1124 17:22:08.252520 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhddh/crc-debug-hs4jq" Nov 24 17:22:08 crc kubenswrapper[4768]: I1124 17:22:08.362574 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmsls\" (UniqueName: \"kubernetes.io/projected/b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b-kube-api-access-cmsls\") pod \"b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b\" (UID: \"b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b\") " Nov 24 17:22:08 crc kubenswrapper[4768]: I1124 17:22:08.362649 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b-host\") pod \"b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b\" (UID: \"b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b\") " Nov 24 17:22:08 crc kubenswrapper[4768]: I1124 17:22:08.362727 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b-host" (OuterVolumeSpecName: "host") pod "b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b" (UID: "b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:22:08 crc kubenswrapper[4768]: I1124 17:22:08.363392 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b-host\") on node \"crc\" DevicePath \"\"" Nov 24 17:22:08 crc kubenswrapper[4768]: I1124 17:22:08.367857 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b-kube-api-access-cmsls" (OuterVolumeSpecName: "kube-api-access-cmsls") pod "b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b" (UID: "b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b"). InnerVolumeSpecName "kube-api-access-cmsls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:22:08 crc kubenswrapper[4768]: I1124 17:22:08.464764 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmsls\" (UniqueName: \"kubernetes.io/projected/b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b-kube-api-access-cmsls\") on node \"crc\" DevicePath \"\"" Nov 24 17:22:09 crc kubenswrapper[4768]: I1124 17:22:09.143214 4768 scope.go:117] "RemoveContainer" containerID="128182d236c5f0b8366d3ef254891d3423eaf5c17d2a3684e6662c63af189742" Nov 24 17:22:09 crc kubenswrapper[4768]: I1124 17:22:09.143246 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhddh/crc-debug-hs4jq" Nov 24 17:22:09 crc kubenswrapper[4768]: I1124 17:22:09.591023 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b" path="/var/lib/kubelet/pods/b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b/volumes" Nov 24 17:22:15 crc kubenswrapper[4768]: I1124 17:22:15.581416 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:22:15 crc kubenswrapper[4768]: E1124 17:22:15.582247 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:22:22 crc kubenswrapper[4768]: I1124 17:22:22.937717 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-54d9965d5d-g2r7n_0eb91316-55e3-466f-bc29-314359383931/barbican-api/0.log" Nov 24 17:22:23 crc kubenswrapper[4768]: I1124 17:22:23.070221 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-54d9965d5d-g2r7n_0eb91316-55e3-466f-bc29-314359383931/barbican-api-log/0.log" Nov 24 17:22:23 crc kubenswrapper[4768]: I1124 17:22:23.104001 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7fdbb4868-m84ml_758f8654-5012-43b2-a4b5-adc902722254/barbican-keystone-listener/0.log" Nov 24 17:22:23 crc kubenswrapper[4768]: I1124 17:22:23.228025 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7fdbb4868-m84ml_758f8654-5012-43b2-a4b5-adc902722254/barbican-keystone-listener-log/0.log" Nov 24 17:22:23 crc kubenswrapper[4768]: I1124 17:22:23.308489 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-864dc88cf9-8c7r4_af648a4f-aca8-4b51-8650-6990ae26b259/barbican-worker/0.log" Nov 24 17:22:23 crc 
kubenswrapper[4768]: I1124 17:22:23.349271 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-864dc88cf9-8c7r4_af648a4f-aca8-4b51-8650-6990ae26b259/barbican-worker-log/0.log" Nov 24 17:22:23 crc kubenswrapper[4768]: I1124 17:22:23.540684 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_509cc4fd-7197-418e-9536-6024e2a95f58/ceilometer-central-agent/0.log" Nov 24 17:22:23 crc kubenswrapper[4768]: I1124 17:22:23.561662 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_509cc4fd-7197-418e-9536-6024e2a95f58/ceilometer-notification-agent/0.log" Nov 24 17:22:23 crc kubenswrapper[4768]: I1124 17:22:23.600159 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_509cc4fd-7197-418e-9536-6024e2a95f58/proxy-httpd/0.log" Nov 24 17:22:23 crc kubenswrapper[4768]: I1124 17:22:23.710045 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_509cc4fd-7197-418e-9536-6024e2a95f58/sg-core/0.log" Nov 24 17:22:23 crc kubenswrapper[4768]: I1124 17:22:23.760415 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3/cinder-api/0.log" Nov 24 17:22:23 crc kubenswrapper[4768]: I1124 17:22:23.814138 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3/cinder-api-log/0.log" Nov 24 17:22:23 crc kubenswrapper[4768]: I1124 17:22:23.949278 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3b85390f-acde-4350-8c18-1f588ffa8ab5/cinder-scheduler/0.log" Nov 24 17:22:23 crc kubenswrapper[4768]: I1124 17:22:23.991614 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3b85390f-acde-4350-8c18-1f588ffa8ab5/probe/0.log" Nov 24 17:22:24 crc kubenswrapper[4768]: I1124 17:22:24.096058 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-89c5cd4d5-xcjb4_7aad7301-e116-40bb-9af0-f19afd1d17b4/init/0.log" Nov 24 17:22:24 crc kubenswrapper[4768]: I1124 17:22:24.289082 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-89c5cd4d5-xcjb4_7aad7301-e116-40bb-9af0-f19afd1d17b4/dnsmasq-dns/0.log" Nov 24 17:22:24 crc kubenswrapper[4768]: I1124 17:22:24.354493 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-89c5cd4d5-xcjb4_7aad7301-e116-40bb-9af0-f19afd1d17b4/init/0.log" Nov 24 17:22:24 crc kubenswrapper[4768]: I1124 17:22:24.369086 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3df486b9-bc37-4240-9ed2-76dc84b54031/glance-httpd/0.log" Nov 24 17:22:24 crc kubenswrapper[4768]: I1124 17:22:24.552476 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3df486b9-bc37-4240-9ed2-76dc84b54031/glance-log/0.log" Nov 24 17:22:24 crc kubenswrapper[4768]: I1124 17:22:24.569245 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6eb8b800-a966-48fe-8075-4709302ee14d/glance-log/0.log" Nov 24 17:22:24 crc kubenswrapper[4768]: I1124 17:22:24.591751 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6eb8b800-a966-48fe-8075-4709302ee14d/glance-httpd/0.log" Nov 24 17:22:24 crc kubenswrapper[4768]: I1124 17:22:24.746101 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-798b498bb4-66crl_194cfeda-1348-4917-bb28-8cde275f7caa/init/0.log" Nov 24 17:22:24 crc kubenswrapper[4768]: I1124 17:22:24.894170 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-798b498bb4-66crl_194cfeda-1348-4917-bb28-8cde275f7caa/init/0.log" Nov 24 17:22:24 crc kubenswrapper[4768]: I1124 17:22:24.972552 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-798b498bb4-66crl_194cfeda-1348-4917-bb28-8cde275f7caa/ironic-api-log/0.log" Nov 24 17:22:25 crc kubenswrapper[4768]: I1124 17:22:25.014591 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-798b498bb4-66crl_194cfeda-1348-4917-bb28-8cde275f7caa/ironic-api/0.log" Nov 24 17:22:25 crc kubenswrapper[4768]: I1124 17:22:25.080191 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/init/0.log" Nov 24 17:22:25 crc kubenswrapper[4768]: I1124 17:22:25.314830 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/ironic-python-agent-init/0.log" Nov 24 17:22:25 crc kubenswrapper[4768]: I1124 17:22:25.338854 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/init/0.log" Nov 24 17:22:25 crc kubenswrapper[4768]: I1124 17:22:25.341487 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/ironic-python-agent-init/0.log" Nov 24 17:22:25 crc kubenswrapper[4768]: I1124 17:22:25.559409 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/init/0.log" Nov 24 17:22:25 crc kubenswrapper[4768]: I1124 17:22:25.662932 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/ironic-python-agent-init/0.log" Nov 24 17:22:25 crc kubenswrapper[4768]: I1124 17:22:25.946425 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/init/0.log" Nov 24 17:22:26 crc kubenswrapper[4768]: I1124 17:22:26.131791 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/pxe-init/0.log" Nov 24 17:22:26 crc kubenswrapper[4768]: I1124 17:22:26.150873 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/ironic-python-agent-init/0.log" Nov 24 17:22:26 crc kubenswrapper[4768]: I1124 17:22:26.403593 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/httpboot/0.log" Nov 24 17:22:26 crc kubenswrapper[4768]: I1124 17:22:26.581420 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:22:26 crc kubenswrapper[4768]: E1124 17:22:26.581649 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" 
podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:22:26 crc kubenswrapper[4768]: I1124 17:22:26.690492 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/ironic-conductor/0.log" Nov 24 17:22:26 crc kubenswrapper[4768]: I1124 17:22:26.731813 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/ramdisk-logs/0.log" Nov 24 17:22:26 crc kubenswrapper[4768]: I1124 17:22:26.894316 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/pxe-init/0.log" Nov 24 17:22:26 crc kubenswrapper[4768]: I1124 17:22:26.943836 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/pxe-init/0.log" Nov 24 17:22:26 crc kubenswrapper[4768]: I1124 17:22:26.975893 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-jtdld_443cde2a-91e0-404e-a067-00558608d888/init/0.log" Nov 24 17:22:27 crc kubenswrapper[4768]: I1124 17:22:27.137743 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-jtdld_443cde2a-91e0-404e-a067-00558608d888/init/0.log" Nov 24 17:22:27 crc kubenswrapper[4768]: I1124 17:22:27.188415 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-jtdld_443cde2a-91e0-404e-a067-00558608d888/ironic-db-sync/0.log" Nov 24 17:22:27 crc kubenswrapper[4768]: I1124 17:22:27.243033 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_4150a56f-5273-4601-8abd-53554fee9e46/ironic-python-agent-init/0.log" Nov 24 17:22:27 crc kubenswrapper[4768]: I1124 17:22:27.313103 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/pxe-init/0.log" Nov 24 17:22:27 crc kubenswrapper[4768]: I1124 17:22:27.398804 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_4150a56f-5273-4601-8abd-53554fee9e46/inspector-pxe-init/0.log" Nov 24 17:22:27 crc kubenswrapper[4768]: I1124 17:22:27.411461 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_4150a56f-5273-4601-8abd-53554fee9e46/inspector-pxe-init/0.log" Nov 24 17:22:27 crc kubenswrapper[4768]: I1124 17:22:27.427886 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_4150a56f-5273-4601-8abd-53554fee9e46/ironic-python-agent-init/0.log" Nov 24 17:22:27 crc kubenswrapper[4768]: I1124 17:22:27.582587 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_4150a56f-5273-4601-8abd-53554fee9e46/inspector-pxe-init/0.log" Nov 24 17:22:27 crc kubenswrapper[4768]: I1124 17:22:27.625179 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_4150a56f-5273-4601-8abd-53554fee9e46/inspector-httpboot/0.log" Nov 24 17:22:27 crc kubenswrapper[4768]: I1124 17:22:27.652916 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_4150a56f-5273-4601-8abd-53554fee9e46/ironic-inspector/1.log" Nov 24 17:22:27 crc kubenswrapper[4768]: I1124 17:22:27.657416 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_4150a56f-5273-4601-8abd-53554fee9e46/ironic-inspector/2.log" Nov 24 17:22:27 crc kubenswrapper[4768]: I1124 
17:22:27.662409 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_4150a56f-5273-4601-8abd-53554fee9e46/ironic-python-agent-init/0.log" Nov 24 17:22:27 crc kubenswrapper[4768]: I1124 17:22:27.844532 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_4150a56f-5273-4601-8abd-53554fee9e46/ramdisk-logs/0.log" Nov 24 17:22:27 crc kubenswrapper[4768]: I1124 17:22:27.847232 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_4150a56f-5273-4601-8abd-53554fee9e46/ironic-inspector-httpd/0.log" Nov 24 17:22:27 crc kubenswrapper[4768]: I1124 17:22:27.904254 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-db-sync-hk9hx_d39158e2-1592-48f9-ba0e-198ab1030790/ironic-inspector-db-sync/0.log" Nov 24 17:22:28 crc kubenswrapper[4768]: I1124 17:22:28.042589 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-cb4d89897-bnsh5_26b563bb-da9a-43fe-b201-9f77ed0d0ddd/ironic-neutron-agent/2.log" Nov 24 17:22:28 crc kubenswrapper[4768]: I1124 17:22:28.044935 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-cb4d89897-bnsh5_26b563bb-da9a-43fe-b201-9f77ed0d0ddd/ironic-neutron-agent/1.log" Nov 24 17:22:28 crc kubenswrapper[4768]: I1124 17:22:28.274454 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-74667f8554-ph5sd_eff6ece5-de21-4541-96d3-7a82e5a1d789/keystone-api/0.log" Nov 24 17:22:28 crc kubenswrapper[4768]: I1124 17:22:28.437771 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_be754be8-e18d-4413-bf31-5258e9ad4544/kube-state-metrics/0.log" Nov 24 17:22:28 crc kubenswrapper[4768]: I1124 17:22:28.645634 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-c9b47fdf7-ztl8b_4ffadf60-9eff-4bf9-b0bd-9480cbd0d917/neutron-httpd/0.log" Nov 24 17:22:28 crc kubenswrapper[4768]: I1124 17:22:28.732724 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-c9b47fdf7-ztl8b_4ffadf60-9eff-4bf9-b0bd-9480cbd0d917/neutron-api/0.log" Nov 24 17:22:28 crc kubenswrapper[4768]: I1124 17:22:28.936406 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2f9be604-c179-43ac-b565-428652071d6e/nova-api-api/0.log" Nov 24 17:22:28 crc kubenswrapper[4768]: I1124 17:22:28.941540 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2f9be604-c179-43ac-b565-428652071d6e/nova-api-log/0.log" Nov 24 17:22:29 crc kubenswrapper[4768]: I1124 17:22:29.063966 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_aaf1fd30-6ac7-4418-93f7-cf24adacd921/nova-cell0-conductor-conductor/0.log" Nov 24 17:22:29 crc kubenswrapper[4768]: I1124 17:22:29.228915 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f2ce7d34-ee24-4fc3-8cad-1ca78c6e1d51/nova-cell1-conductor-conductor/0.log" Nov 24 17:22:29 crc kubenswrapper[4768]: I1124 17:22:29.371644 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_0f182adb-6256-41d3-b7f0-bfa5e16965f7/nova-cell1-novncproxy-novncproxy/0.log" Nov 24 17:22:29 crc kubenswrapper[4768]: I1124 17:22:29.589952 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_972962b2-f34e-4ad2-825e-2be316ce2ec3/nova-metadata-log/0.log" Nov 24 17:22:29 
crc kubenswrapper[4768]: I1124 17:22:29.815172 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_589aaf7d-1ce5-4a36-9501-b91900237cb4/nova-scheduler-scheduler/0.log" Nov 24 17:22:29 crc kubenswrapper[4768]: I1124 17:22:29.881395 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f5fda78c-6764-4dfb-837a-b9e48ff5bea8/mysql-bootstrap/0.log" Nov 24 17:22:29 crc kubenswrapper[4768]: I1124 17:22:29.902913 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_972962b2-f34e-4ad2-825e-2be316ce2ec3/nova-metadata-metadata/0.log" Nov 24 17:22:30 crc kubenswrapper[4768]: I1124 17:22:30.053269 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f5fda78c-6764-4dfb-837a-b9e48ff5bea8/galera/0.log" Nov 24 17:22:30 crc kubenswrapper[4768]: I1124 17:22:30.078229 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f5fda78c-6764-4dfb-837a-b9e48ff5bea8/mysql-bootstrap/0.log" Nov 24 17:22:30 crc kubenswrapper[4768]: I1124 17:22:30.163767 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0d0c08ff-07c5-42d9-bbd4-77169f98868a/mysql-bootstrap/0.log" Nov 24 17:22:30 crc kubenswrapper[4768]: I1124 17:22:30.372257 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0d0c08ff-07c5-42d9-bbd4-77169f98868a/mysql-bootstrap/0.log" Nov 24 17:22:30 crc kubenswrapper[4768]: I1124 17:22:30.381269 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0d0c08ff-07c5-42d9-bbd4-77169f98868a/galera/0.log" Nov 24 17:22:30 crc kubenswrapper[4768]: I1124 17:22:30.402671 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7541e37b-3221-4158-8d66-4682a77e8172/openstackclient/0.log" Nov 24 17:22:30 crc kubenswrapper[4768]: I1124 17:22:30.632065 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-8j94t_df42583a-33cf-4b89-9f69-7f3baeb6e7b5/ovn-controller/0.log" Nov 24 17:22:30 crc kubenswrapper[4768]: I1124 17:22:30.672171 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-4crs9_d668018c-aa61-4c17-9af6-f00933b4160c/openstack-network-exporter/0.log" Nov 24 17:22:30 crc kubenswrapper[4768]: I1124 17:22:30.831703 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zpbbq_40425fc1-a61b-4da7-95a4-262b16a8020f/ovsdb-server-init/0.log" Nov 24 17:22:31 crc kubenswrapper[4768]: I1124 17:22:31.042063 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zpbbq_40425fc1-a61b-4da7-95a4-262b16a8020f/ovsdb-server/0.log" Nov 24 17:22:31 crc kubenswrapper[4768]: I1124 17:22:31.066047 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zpbbq_40425fc1-a61b-4da7-95a4-262b16a8020f/ovsdb-server-init/0.log" Nov 24 17:22:31 crc kubenswrapper[4768]: I1124 17:22:31.124998 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zpbbq_40425fc1-a61b-4da7-95a4-262b16a8020f/ovs-vswitchd/0.log" Nov 24 17:22:31 crc kubenswrapper[4768]: I1124 17:22:31.227471 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_84f66fa0-19d0-40f2-a4d0-4ddc58101d00/openstack-network-exporter/0.log" Nov 24 17:22:31 crc kubenswrapper[4768]: I1124 
17:22:31.303747 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_84f66fa0-19d0-40f2-a4d0-4ddc58101d00/ovn-northd/0.log" Nov 24 17:22:31 crc kubenswrapper[4768]: I1124 17:22:31.356545 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f4ae8da1-9449-46bf-8e88-fc42708e6c53/openstack-network-exporter/0.log" Nov 24 17:22:31 crc kubenswrapper[4768]: I1124 17:22:31.459854 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f4ae8da1-9449-46bf-8e88-fc42708e6c53/ovsdbserver-nb/0.log" Nov 24 17:22:31 crc kubenswrapper[4768]: I1124 17:22:31.578778 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_841a709e-ced3-499f-b13e-d0e1ff90ad11/openstack-network-exporter/0.log" Nov 24 17:22:31 crc kubenswrapper[4768]: I1124 17:22:31.612879 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_841a709e-ced3-499f-b13e-d0e1ff90ad11/ovsdbserver-sb/0.log" Nov 24 17:22:31 crc kubenswrapper[4768]: I1124 17:22:31.821174 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-58f546f576-kqv27_244d26f2-3748-48ba-ab9f-ba52e5ad5729/placement-api/0.log" Nov 24 17:22:31 crc kubenswrapper[4768]: I1124 17:22:31.877906 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-58f546f576-kqv27_244d26f2-3748-48ba-ab9f-ba52e5ad5729/placement-log/0.log" Nov 24 17:22:31 crc kubenswrapper[4768]: I1124 17:22:31.969752 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cf5db907-56c6-4254-8a98-0a6750fd0a07/setup-container/0.log" Nov 24 17:22:32 crc kubenswrapper[4768]: I1124 17:22:32.217725 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b1be76b0-164b-4bd7-950a-38e512cb4d5a/setup-container/0.log" Nov 24 17:22:32 crc kubenswrapper[4768]: I1124 17:22:32.219280 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cf5db907-56c6-4254-8a98-0a6750fd0a07/setup-container/0.log" Nov 24 17:22:32 crc kubenswrapper[4768]: I1124 17:22:32.311419 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cf5db907-56c6-4254-8a98-0a6750fd0a07/rabbitmq/0.log" Nov 24 17:22:32 crc kubenswrapper[4768]: I1124 17:22:32.537568 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b1be76b0-164b-4bd7-950a-38e512cb4d5a/setup-container/0.log" Nov 24 17:22:32 crc kubenswrapper[4768]: I1124 17:22:32.540543 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b1be76b0-164b-4bd7-950a-38e512cb4d5a/rabbitmq/0.log" Nov 24 17:22:32 crc kubenswrapper[4768]: I1124 17:22:32.612927 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-68997d6dc7-xqk74_6faf5c89-9071-4710-bf7a-91f8b276370b/proxy-httpd/0.log" Nov 24 17:22:32 crc kubenswrapper[4768]: I1124 17:22:32.758663 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-68997d6dc7-xqk74_6faf5c89-9071-4710-bf7a-91f8b276370b/proxy-server/0.log" Nov 24 17:22:32 crc kubenswrapper[4768]: I1124 17:22:32.825327 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-ks8ts_b46e54e9-1ffb-4094-a42a-0d7a86fff17c/swift-ring-rebalance/0.log" Nov 24 17:22:32 crc kubenswrapper[4768]: I1124 17:22:32.997334 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/account-auditor/0.log" Nov 24 17:22:33 crc kubenswrapper[4768]: I1124 17:22:33.051725 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/account-reaper/0.log" Nov 24 17:22:33 crc kubenswrapper[4768]: I1124 17:22:33.106680 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/account-server/0.log" Nov 24 17:22:33 crc kubenswrapper[4768]: I1124 17:22:33.115974 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/account-replicator/0.log" Nov 24 17:22:33 crc kubenswrapper[4768]: I1124 17:22:33.249577 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/container-auditor/0.log" Nov 24 17:22:33 crc kubenswrapper[4768]: I1124 17:22:33.286971 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/container-replicator/0.log" Nov 24 17:22:33 crc kubenswrapper[4768]: I1124 17:22:33.294490 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/container-updater/0.log" Nov 24 17:22:33 crc kubenswrapper[4768]: I1124 17:22:33.299011 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/container-server/0.log" Nov 24 17:22:33 crc kubenswrapper[4768]: I1124 17:22:33.443788 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/object-auditor/0.log" Nov 24 17:22:33 crc kubenswrapper[4768]: I1124 17:22:33.474519 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/object-server/0.log" Nov 24 17:22:33 crc kubenswrapper[4768]: I1124 17:22:33.495224 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/object-expirer/0.log" Nov 24 17:22:33 crc kubenswrapper[4768]: I1124 17:22:33.517951 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/object-replicator/0.log" Nov 24 17:22:33 crc kubenswrapper[4768]: I1124 17:22:33.727074 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/object-updater/0.log" Nov 24 17:22:33 crc kubenswrapper[4768]: I1124 17:22:33.729363 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/swift-recon-cron/0.log" Nov 24 17:22:33 crc kubenswrapper[4768]: I1124 17:22:33.740274 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/rsync/0.log" Nov 24 17:22:36 crc kubenswrapper[4768]: I1124 17:22:36.445926 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_bfe18146-b6db-422b-965f-8b22d4943e4f/memcached/0.log" Nov 24 17:22:37 crc kubenswrapper[4768]: I1124 17:22:37.580653 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:22:37 crc kubenswrapper[4768]: E1124 17:22:37.580912 4768 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:22:48 crc kubenswrapper[4768]: I1124 17:22:48.580528 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:22:48 crc kubenswrapper[4768]: E1124 17:22:48.581210 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:22:53 crc kubenswrapper[4768]: I1124 17:22:53.465486 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz_3986c16f-d992-4d26-9f12-0892ffc031d6/util/0.log" Nov 24 17:22:53 crc kubenswrapper[4768]: I1124 17:22:53.695734 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz_3986c16f-d992-4d26-9f12-0892ffc031d6/pull/0.log" Nov 24 17:22:53 crc kubenswrapper[4768]: I1124 17:22:53.697825 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz_3986c16f-d992-4d26-9f12-0892ffc031d6/util/0.log" Nov 24 17:22:53 crc kubenswrapper[4768]: I1124 17:22:53.701694 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz_3986c16f-d992-4d26-9f12-0892ffc031d6/pull/0.log" Nov 24 17:22:53 crc kubenswrapper[4768]: I1124 17:22:53.858968 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz_3986c16f-d992-4d26-9f12-0892ffc031d6/util/0.log" Nov 24 17:22:53 crc kubenswrapper[4768]: I1124 17:22:53.872421 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz_3986c16f-d992-4d26-9f12-0892ffc031d6/pull/0.log" Nov 24 17:22:53 crc kubenswrapper[4768]: I1124 17:22:53.889470 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz_3986c16f-d992-4d26-9f12-0892ffc031d6/extract/0.log" Nov 24 17:22:54 crc kubenswrapper[4768]: I1124 17:22:54.005424 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-4xg49_61f1ba78-cd9d-4202-9463-f7a4c5cc9092/kube-rbac-proxy/0.log" Nov 24 17:22:54 crc kubenswrapper[4768]: I1124 17:22:54.102291 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-4xg49_61f1ba78-cd9d-4202-9463-f7a4c5cc9092/manager/0.log" Nov 24 17:22:54 crc kubenswrapper[4768]: I1124 17:22:54.184929 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-gnzjb_1a60eac6-e17c-4621-9367-3d1b60aab811/kube-rbac-proxy/0.log" Nov 24 17:22:54 crc kubenswrapper[4768]: I1124 17:22:54.413942 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-gnzjb_1a60eac6-e17c-4621-9367-3d1b60aab811/manager/0.log" Nov 24 17:22:54 crc kubenswrapper[4768]: I1124 17:22:54.511435 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-jdszs_f5b8ba2f-084a-4285-938b-5ffe669a9250/kube-rbac-proxy/0.log" Nov 24 17:22:54 crc kubenswrapper[4768]: I1124 17:22:54.599416 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-jdszs_f5b8ba2f-084a-4285-938b-5ffe669a9250/manager/0.log" Nov 24 17:22:54 crc kubenswrapper[4768]: I1124 17:22:54.753298 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-fxzrc_db716c0e-bc96-4eaa-af75-184cd71e8124/kube-rbac-proxy/0.log" Nov 24 17:22:54 crc kubenswrapper[4768]: I1124 17:22:54.856930 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-fxzrc_db716c0e-bc96-4eaa-af75-184cd71e8124/manager/0.log" Nov 24 17:22:54 crc kubenswrapper[4768]: I1124 17:22:54.929261 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-6smrr_d35343f5-188c-4787-9002-125c9e597e80/kube-rbac-proxy/0.log" Nov 24 17:22:55 crc kubenswrapper[4768]: I1124 17:22:55.006912 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-6smrr_d35343f5-188c-4787-9002-125c9e597e80/manager/0.log" Nov 24 17:22:55 crc kubenswrapper[4768]: I1124 17:22:55.094608 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-jfk9g_f7e72195-5597-498f-906e-573b0c5c8295/kube-rbac-proxy/0.log" Nov 24 17:22:55 crc kubenswrapper[4768]: I1124 17:22:55.125329 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-jfk9g_f7e72195-5597-498f-906e-573b0c5c8295/manager/0.log" Nov 24 17:22:55 crc kubenswrapper[4768]: I1124 17:22:55.305680 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-d9crw_e2835f06-b5ce-4170-a4c3-4a08e9cc2815/kube-rbac-proxy/0.log" Nov 24 17:22:55 crc kubenswrapper[4768]: I1124 17:22:55.491798 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-58fc45656d-mlqr9_cdfcbb97-9f2e-40ab-863a-93e592ee728a/kube-rbac-proxy/0.log" Nov 24 17:22:55 crc kubenswrapper[4768]: I1124 17:22:55.547443 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-d9crw_e2835f06-b5ce-4170-a4c3-4a08e9cc2815/manager/0.log" Nov 24 17:22:55 crc kubenswrapper[4768]: I1124 17:22:55.606050 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-58fc45656d-mlqr9_cdfcbb97-9f2e-40ab-863a-93e592ee728a/manager/0.log" Nov 24 17:22:55 crc kubenswrapper[4768]: I1124 17:22:55.696127 4768 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-zsr4q_8eff7b8e-21b1-4d9f-ac7b-bc44593394c1/kube-rbac-proxy/0.log" Nov 24 17:22:55 crc kubenswrapper[4768]: I1124 17:22:55.770073 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-zsr4q_8eff7b8e-21b1-4d9f-ac7b-bc44593394c1/manager/0.log" Nov 24 17:22:55 crc kubenswrapper[4768]: I1124 17:22:55.815128 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-dsjtl_a718e502-d0e6-45ee-8a65-88de1381da04/kube-rbac-proxy/0.log" Nov 24 17:22:55 crc kubenswrapper[4768]: I1124 17:22:55.895580 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-dsjtl_a718e502-d0e6-45ee-8a65-88de1381da04/manager/0.log" Nov 24 17:22:55 crc kubenswrapper[4768]: I1124 17:22:55.992151 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-xv4wf_a8b9e845-7f76-4609-aef9-89d1a16c971b/manager/0.log" Nov 24 17:22:55 crc kubenswrapper[4768]: I1124 17:22:55.997785 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-xv4wf_a8b9e845-7f76-4609-aef9-89d1a16c971b/kube-rbac-proxy/0.log" Nov 24 17:22:56 crc kubenswrapper[4768]: I1124 17:22:56.136855 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-9x7r8_2f3138aa-0515-46f5-b897-191356f55fa4/kube-rbac-proxy/0.log" Nov 24 17:22:56 crc kubenswrapper[4768]: I1124 17:22:56.250552 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-9x7r8_2f3138aa-0515-46f5-b897-191356f55fa4/manager/0.log" Nov 24 17:22:56 crc kubenswrapper[4768]: I1124 17:22:56.286214 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-9sgvb_18e53c5a-b1dd-4f0e-9bf6-8d97954d9d5d/kube-rbac-proxy/0.log" Nov 24 17:22:56 crc kubenswrapper[4768]: I1124 17:22:56.442108 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-9sgvb_18e53c5a-b1dd-4f0e-9bf6-8d97954d9d5d/manager/0.log" Nov 24 17:22:56 crc kubenswrapper[4768]: I1124 17:22:56.450830 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-6nh25_8badbdc1-a611-4ada-821a-daade496a649/kube-rbac-proxy/0.log" Nov 24 17:22:56 crc kubenswrapper[4768]: I1124 17:22:56.526194 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-6nh25_8badbdc1-a611-4ada-821a-daade496a649/manager/0.log" Nov 24 17:22:56 crc kubenswrapper[4768]: I1124 17:22:56.638693 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g_2020ac4a-5a4a-4c38-b667-5432dbf3d891/kube-rbac-proxy/0.log" Nov 24 17:22:56 crc kubenswrapper[4768]: I1124 17:22:56.643496 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g_2020ac4a-5a4a-4c38-b667-5432dbf3d891/manager/0.log" Nov 24 17:22:57 crc kubenswrapper[4768]: I1124 
17:22:57.014743 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-849cb45cff-pvcvk_24c6b375-70f7-4954-9f65-4e3dcf12de68/operator/0.log" Nov 24 17:22:57 crc kubenswrapper[4768]: I1124 17:22:57.149082 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-777rr_87ecddb5-623c-40cb-ba80-c869cea78856/registry-server/0.log" Nov 24 17:22:57 crc kubenswrapper[4768]: I1124 17:22:57.274858 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-v2hfk_f7c09f33-05d7-4251-930c-43d381f7f662/kube-rbac-proxy/0.log" Nov 24 17:22:57 crc kubenswrapper[4768]: I1124 17:22:57.469305 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-v2hfk_f7c09f33-05d7-4251-930c-43d381f7f662/manager/0.log" Nov 24 17:22:57 crc kubenswrapper[4768]: I1124 17:22:57.501225 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-qnqvs_e2f173d4-03f8-44b0-b05f-3dfd845569e8/manager/0.log" Nov 24 17:22:57 crc kubenswrapper[4768]: I1124 17:22:57.541581 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-qnqvs_e2f173d4-03f8-44b0-b05f-3dfd845569e8/kube-rbac-proxy/0.log" Nov 24 17:22:57 crc kubenswrapper[4768]: I1124 17:22:57.669399 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-56fcd5b457-nhnr6_920f3653-2dc6-4999-81c4-05248ca44d07/manager/0.log" Nov 24 17:22:57 crc kubenswrapper[4768]: I1124 17:22:57.712012 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-x24f2_27ed9b45-b076-4104-a661-bc231021ae5b/kube-rbac-proxy/0.log" Nov 24 17:22:57 crc kubenswrapper[4768]: I1124 17:22:57.729510 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-ttgkz_9657d373-da37-4ca2-b8fe-7827bc37706f/operator/0.log" Nov 24 17:22:57 crc kubenswrapper[4768]: I1124 17:22:57.948019 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-hvvsp_5b5647ed-7d14-4366-af99-d6d48ec2f033/kube-rbac-proxy/0.log" Nov 24 17:22:57 crc kubenswrapper[4768]: I1124 17:22:57.950539 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-x24f2_27ed9b45-b076-4104-a661-bc231021ae5b/manager/0.log" Nov 24 17:22:58 crc kubenswrapper[4768]: I1124 17:22:58.022186 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-hvvsp_5b5647ed-7d14-4366-af99-d6d48ec2f033/manager/0.log" Nov 24 17:22:58 crc kubenswrapper[4768]: I1124 17:22:58.106862 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-gtc95_d40b5804-6340-4be6-8da4-dca19827c8ee/kube-rbac-proxy/0.log" Nov 24 17:22:58 crc kubenswrapper[4768]: I1124 17:22:58.175911 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-gtc95_d40b5804-6340-4be6-8da4-dca19827c8ee/manager/0.log" Nov 24 17:22:58 crc kubenswrapper[4768]: I1124 
17:22:58.201324 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-b2r7j_f5471d19-b623-4aa2-9a14-56d05fe236f8/kube-rbac-proxy/0.log" Nov 24 17:22:58 crc kubenswrapper[4768]: I1124 17:22:58.222086 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-b2r7j_f5471d19-b623-4aa2-9a14-56d05fe236f8/manager/0.log" Nov 24 17:23:00 crc kubenswrapper[4768]: I1124 17:23:00.580962 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:23:00 crc kubenswrapper[4768]: E1124 17:23:00.581609 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:23:11 crc kubenswrapper[4768]: I1124 17:23:11.581543 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:23:11 crc kubenswrapper[4768]: E1124 17:23:11.582532 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:23:14 crc kubenswrapper[4768]: I1124 17:23:14.026725 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dn5t9_622eb95d-1893-421b-890b-0fbd87dfa0b2/control-plane-machine-set-operator/0.log" Nov 24 17:23:14 crc kubenswrapper[4768]: I1124 17:23:14.230498 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mbvp9_063d4b06-d385-4749-8394-14041350b8e9/kube-rbac-proxy/0.log" Nov 24 17:23:14 crc kubenswrapper[4768]: I1124 17:23:14.256204 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mbvp9_063d4b06-d385-4749-8394-14041350b8e9/machine-api-operator/0.log" Nov 24 17:23:23 crc kubenswrapper[4768]: I1124 17:23:23.580731 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:23:23 crc kubenswrapper[4768]: E1124 17:23:23.581622 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:23:25 crc kubenswrapper[4768]: I1124 17:23:25.138945 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-hbcgj_406ba9bc-fe9f-4e90-be27-c7947c0049cd/cert-manager-controller/0.log" Nov 24 17:23:25 crc kubenswrapper[4768]: I1124 
17:23:25.261872 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-vcfg5_553c5463-b1f6-410c-a1d6-032a7c57d30c/cert-manager-cainjector/0.log" Nov 24 17:23:25 crc kubenswrapper[4768]: I1124 17:23:25.380018 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-kd7w2_6e2775e6-ea84-4b7d-a5e7-0ddc4b3d174b/cert-manager-webhook/0.log" Nov 24 17:23:36 crc kubenswrapper[4768]: I1124 17:23:36.080416 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-6zplx_8f14df85-542b-433f-a661-79f1707a03ad/nmstate-console-plugin/0.log" Nov 24 17:23:36 crc kubenswrapper[4768]: I1124 17:23:36.274575 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-p2zsh_75419742-7b67-4c11-9d45-2db75c1d8342/nmstate-handler/0.log" Nov 24 17:23:36 crc kubenswrapper[4768]: I1124 17:23:36.304639 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-5hkm5_5fd414a4-49e9-44b7-8207-e4edb7887dba/nmstate-metrics/0.log" Nov 24 17:23:36 crc kubenswrapper[4768]: I1124 17:23:36.305483 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-5hkm5_5fd414a4-49e9-44b7-8207-e4edb7887dba/kube-rbac-proxy/0.log" Nov 24 17:23:36 crc kubenswrapper[4768]: I1124 17:23:36.491942 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-pqvp5_0149c8b7-22b3-4d9d-8bb1-6b8725c3e85b/nmstate-operator/0.log" Nov 24 17:23:36 crc kubenswrapper[4768]: I1124 17:23:36.513182 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-rht2n_3755f3c6-8022-4edb-8efe-b858b58cf052/nmstate-webhook/0.log" Nov 24 17:23:36 crc kubenswrapper[4768]: I1124 17:23:36.581104 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:23:36 crc kubenswrapper[4768]: E1124 17:23:36.581404 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:23:49 crc kubenswrapper[4768]: I1124 17:23:49.224923 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-6rm47_7ec0e305-1a0c-449b-8c6c-9f5930582193/kube-rbac-proxy/0.log" Nov 24 17:23:49 crc kubenswrapper[4768]: I1124 17:23:49.228524 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-6rm47_7ec0e305-1a0c-449b-8c6c-9f5930582193/controller/0.log" Nov 24 17:23:49 crc kubenswrapper[4768]: I1124 17:23:49.337718 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-2szdn_98a7049b-d1ef-41d1-aa13-62bc2f1657ea/frr-k8s-webhook-server/0.log" Nov 24 17:23:49 crc kubenswrapper[4768]: I1124 17:23:49.431584 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-frr-files/0.log" Nov 24 17:23:49 crc kubenswrapper[4768]: I1124 17:23:49.692659 4768 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-metrics/0.log" Nov 24 17:23:49 crc kubenswrapper[4768]: I1124 17:23:49.693521 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-reloader/0.log" Nov 24 17:23:49 crc kubenswrapper[4768]: I1124 17:23:49.718776 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-frr-files/0.log" Nov 24 17:23:49 crc kubenswrapper[4768]: I1124 17:23:49.763189 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-reloader/0.log" Nov 24 17:23:49 crc kubenswrapper[4768]: I1124 17:23:49.895453 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-frr-files/0.log" Nov 24 17:23:49 crc kubenswrapper[4768]: I1124 17:23:49.917471 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-reloader/0.log" Nov 24 17:23:49 crc kubenswrapper[4768]: I1124 17:23:49.978517 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-metrics/0.log" Nov 24 17:23:49 crc kubenswrapper[4768]: I1124 17:23:49.979195 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-metrics/0.log" Nov 24 17:23:50 crc kubenswrapper[4768]: I1124 17:23:50.107839 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-frr-files/0.log" Nov 24 17:23:50 crc kubenswrapper[4768]: I1124 17:23:50.133995 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-metrics/0.log" Nov 24 17:23:50 crc kubenswrapper[4768]: I1124 17:23:50.134032 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-reloader/0.log" Nov 24 17:23:50 crc kubenswrapper[4768]: I1124 17:23:50.144640 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/controller/0.log" Nov 24 17:23:50 crc kubenswrapper[4768]: I1124 17:23:50.289677 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/frr-metrics/0.log" Nov 24 17:23:50 crc kubenswrapper[4768]: I1124 17:23:50.292992 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/kube-rbac-proxy/0.log" Nov 24 17:23:50 crc kubenswrapper[4768]: I1124 17:23:50.369938 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/kube-rbac-proxy-frr/0.log" Nov 24 17:23:50 crc kubenswrapper[4768]: I1124 17:23:50.500201 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/reloader/0.log" Nov 24 17:23:50 crc kubenswrapper[4768]: I1124 17:23:50.630343 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5cc97d846-2sqgw_351e35d8-541a-43c5-b07d-affa44d1c013/manager/0.log" Nov 24 17:23:50 crc kubenswrapper[4768]: I1124 17:23:50.750440 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5f6bc667bb-56fwx_ca825a3d-d8e1-45ce-af38-6874f0b3c498/webhook-server/0.log" Nov 24 17:23:50 crc kubenswrapper[4768]: I1124 17:23:50.924793 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9m6sf_6bd76705-44df-4419-a1d4-e294b3d010fd/kube-rbac-proxy/0.log" Nov 24 17:23:51 crc kubenswrapper[4768]: I1124 17:23:51.334871 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/frr/0.log" Nov 24 17:23:51 crc kubenswrapper[4768]: I1124 17:23:51.407008 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9m6sf_6bd76705-44df-4419-a1d4-e294b3d010fd/speaker/0.log" Nov 24 17:23:51 crc kubenswrapper[4768]: I1124 17:23:51.581369 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:23:51 crc kubenswrapper[4768]: E1124 17:23:51.581636 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:24:02 crc kubenswrapper[4768]: I1124 17:24:02.249269 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp_30c009ab-380d-4bc7-a771-61d41ad10d35/util/0.log" Nov 24 17:24:02 crc kubenswrapper[4768]: I1124 17:24:02.420757 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp_30c009ab-380d-4bc7-a771-61d41ad10d35/util/0.log" Nov 24 17:24:02 crc kubenswrapper[4768]: I1124 17:24:02.427456 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp_30c009ab-380d-4bc7-a771-61d41ad10d35/pull/0.log" Nov 24 17:24:02 crc kubenswrapper[4768]: I1124 17:24:02.505416 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp_30c009ab-380d-4bc7-a771-61d41ad10d35/pull/0.log" Nov 24 17:24:02 crc kubenswrapper[4768]: I1124 17:24:02.648544 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp_30c009ab-380d-4bc7-a771-61d41ad10d35/util/0.log" Nov 24 17:24:02 crc kubenswrapper[4768]: I1124 17:24:02.660550 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp_30c009ab-380d-4bc7-a771-61d41ad10d35/pull/0.log" Nov 24 17:24:02 crc kubenswrapper[4768]: I1124 17:24:02.664856 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp_30c009ab-380d-4bc7-a771-61d41ad10d35/extract/0.log" Nov 24 17:24:02 crc 
kubenswrapper[4768]: I1124 17:24:02.844011 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fz5jq_eca00397-85e6-401b-b0a8-011a3307b0ee/extract-utilities/0.log" Nov 24 17:24:03 crc kubenswrapper[4768]: I1124 17:24:03.024316 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fz5jq_eca00397-85e6-401b-b0a8-011a3307b0ee/extract-content/0.log" Nov 24 17:24:03 crc kubenswrapper[4768]: I1124 17:24:03.024901 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fz5jq_eca00397-85e6-401b-b0a8-011a3307b0ee/extract-utilities/0.log" Nov 24 17:24:03 crc kubenswrapper[4768]: I1124 17:24:03.053127 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fz5jq_eca00397-85e6-401b-b0a8-011a3307b0ee/extract-content/0.log" Nov 24 17:24:03 crc kubenswrapper[4768]: I1124 17:24:03.224426 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fz5jq_eca00397-85e6-401b-b0a8-011a3307b0ee/extract-content/0.log" Nov 24 17:24:03 crc kubenswrapper[4768]: I1124 17:24:03.277414 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fz5jq_eca00397-85e6-401b-b0a8-011a3307b0ee/extract-utilities/0.log" Nov 24 17:24:03 crc kubenswrapper[4768]: I1124 17:24:03.459198 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dsv6c_41aba52e-e435-4061-88d5-30b6d8b78806/extract-utilities/0.log" Nov 24 17:24:03 crc kubenswrapper[4768]: I1124 17:24:03.547229 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fz5jq_eca00397-85e6-401b-b0a8-011a3307b0ee/registry-server/0.log" Nov 24 17:24:03 crc kubenswrapper[4768]: I1124 17:24:03.580970 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:24:03 crc kubenswrapper[4768]: E1124 17:24:03.581252 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:24:03 crc kubenswrapper[4768]: I1124 17:24:03.671724 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dsv6c_41aba52e-e435-4061-88d5-30b6d8b78806/extract-utilities/0.log" Nov 24 17:24:03 crc kubenswrapper[4768]: I1124 17:24:03.692032 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dsv6c_41aba52e-e435-4061-88d5-30b6d8b78806/extract-content/0.log" Nov 24 17:24:03 crc kubenswrapper[4768]: I1124 17:24:03.702824 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dsv6c_41aba52e-e435-4061-88d5-30b6d8b78806/extract-content/0.log" Nov 24 17:24:03 crc kubenswrapper[4768]: I1124 17:24:03.859072 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dsv6c_41aba52e-e435-4061-88d5-30b6d8b78806/extract-utilities/0.log" Nov 24 17:24:03 crc kubenswrapper[4768]: I1124 17:24:03.899922 4768 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dsv6c_41aba52e-e435-4061-88d5-30b6d8b78806/extract-content/0.log" Nov 24 17:24:04 crc kubenswrapper[4768]: I1124 17:24:04.055474 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t_f04516d3-2027-43f2-975d-294f284a7a36/util/0.log" Nov 24 17:24:04 crc kubenswrapper[4768]: I1124 17:24:04.247960 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t_f04516d3-2027-43f2-975d-294f284a7a36/pull/0.log" Nov 24 17:24:04 crc kubenswrapper[4768]: I1124 17:24:04.286158 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t_f04516d3-2027-43f2-975d-294f284a7a36/pull/0.log" Nov 24 17:24:04 crc kubenswrapper[4768]: I1124 17:24:04.378397 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t_f04516d3-2027-43f2-975d-294f284a7a36/util/0.log" Nov 24 17:24:04 crc kubenswrapper[4768]: I1124 17:24:04.382772 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dsv6c_41aba52e-e435-4061-88d5-30b6d8b78806/registry-server/0.log" Nov 24 17:24:04 crc kubenswrapper[4768]: I1124 17:24:04.431326 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t_f04516d3-2027-43f2-975d-294f284a7a36/util/0.log" Nov 24 17:24:04 crc kubenswrapper[4768]: I1124 17:24:04.463501 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t_f04516d3-2027-43f2-975d-294f284a7a36/pull/0.log" Nov 24 17:24:04 crc kubenswrapper[4768]: I1124 17:24:04.519713 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t_f04516d3-2027-43f2-975d-294f284a7a36/extract/0.log" Nov 24 17:24:04 crc kubenswrapper[4768]: I1124 17:24:04.599038 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-5zvk7_453d22cb-b151-4afd-8116-28d85514ca2c/marketplace-operator/0.log" Nov 24 17:24:04 crc kubenswrapper[4768]: I1124 17:24:04.686689 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-47fqh_9e71cc43-12fc-4315-992f-af825fe58680/extract-utilities/0.log" Nov 24 17:24:04 crc kubenswrapper[4768]: I1124 17:24:04.850526 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-47fqh_9e71cc43-12fc-4315-992f-af825fe58680/extract-utilities/0.log" Nov 24 17:24:04 crc kubenswrapper[4768]: I1124 17:24:04.870726 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-47fqh_9e71cc43-12fc-4315-992f-af825fe58680/extract-content/0.log" Nov 24 17:24:04 crc kubenswrapper[4768]: I1124 17:24:04.905883 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-47fqh_9e71cc43-12fc-4315-992f-af825fe58680/extract-content/0.log" Nov 24 17:24:05 crc kubenswrapper[4768]: I1124 17:24:05.020008 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-47fqh_9e71cc43-12fc-4315-992f-af825fe58680/extract-utilities/0.log" Nov 24 17:24:05 crc kubenswrapper[4768]: I1124 17:24:05.024202 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-47fqh_9e71cc43-12fc-4315-992f-af825fe58680/extract-content/0.log" Nov 24 17:24:05 crc kubenswrapper[4768]: I1124 17:24:05.139121 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-47fqh_9e71cc43-12fc-4315-992f-af825fe58680/registry-server/0.log" Nov 24 17:24:05 crc kubenswrapper[4768]: I1124 17:24:05.204058 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4qjbs_3ec76654-6209-40eb-85dc-861ddae3c79f/extract-utilities/0.log" Nov 24 17:24:05 crc kubenswrapper[4768]: I1124 17:24:05.348232 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4qjbs_3ec76654-6209-40eb-85dc-861ddae3c79f/extract-utilities/0.log" Nov 24 17:24:05 crc kubenswrapper[4768]: I1124 17:24:05.376277 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4qjbs_3ec76654-6209-40eb-85dc-861ddae3c79f/extract-content/0.log" Nov 24 17:24:05 crc kubenswrapper[4768]: I1124 17:24:05.394582 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4qjbs_3ec76654-6209-40eb-85dc-861ddae3c79f/extract-content/0.log" Nov 24 17:24:05 crc kubenswrapper[4768]: I1124 17:24:05.563895 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4qjbs_3ec76654-6209-40eb-85dc-861ddae3c79f/extract-utilities/0.log" Nov 24 17:24:05 crc kubenswrapper[4768]: I1124 17:24:05.597672 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4qjbs_3ec76654-6209-40eb-85dc-861ddae3c79f/extract-content/0.log" Nov 24 17:24:05 crc kubenswrapper[4768]: I1124 17:24:05.785931 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4qjbs_3ec76654-6209-40eb-85dc-861ddae3c79f/registry-server/0.log" Nov 24 17:24:15 crc kubenswrapper[4768]: I1124 17:24:15.581261 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:24:15 crc kubenswrapper[4768]: E1124 17:24:15.582121 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:24:27 crc kubenswrapper[4768]: I1124 17:24:27.581485 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:24:27 crc kubenswrapper[4768]: E1124 17:24:27.582362 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" 
podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:24:42 crc kubenswrapper[4768]: I1124 17:24:42.580651 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:24:42 crc kubenswrapper[4768]: E1124 17:24:42.581423 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:24:53 crc kubenswrapper[4768]: I1124 17:24:53.580718 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:24:53 crc kubenswrapper[4768]: E1124 17:24:53.581463 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:25:05 crc kubenswrapper[4768]: I1124 17:25:05.581083 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:25:06 crc kubenswrapper[4768]: I1124 17:25:06.716423 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" event={"ID":"517d8128-bef5-40a3-a786-5010780c2a58","Type":"ContainerStarted","Data":"5d13370034de2225dc19449060f182bae1bf4a76aba56f95b931132dc577bda6"} Nov 24 17:25:35 crc kubenswrapper[4768]: I1124 17:25:35.985700 4768 generic.go:334] "Generic (PLEG): container finished" podID="bd9c8e50-a7e3-49c9-a2dc-626d5324539a" containerID="f6dee8f6402463803d2c05da04a7e555ae851dee153adfcdd480683a86671f23" exitCode=0 Nov 24 17:25:35 crc kubenswrapper[4768]: I1124 17:25:35.986180 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qhddh/must-gather-7c9fk" event={"ID":"bd9c8e50-a7e3-49c9-a2dc-626d5324539a","Type":"ContainerDied","Data":"f6dee8f6402463803d2c05da04a7e555ae851dee153adfcdd480683a86671f23"} Nov 24 17:25:35 crc kubenswrapper[4768]: I1124 17:25:35.986896 4768 scope.go:117] "RemoveContainer" containerID="f6dee8f6402463803d2c05da04a7e555ae851dee153adfcdd480683a86671f23" Nov 24 17:25:36 crc kubenswrapper[4768]: I1124 17:25:36.745464 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qhddh_must-gather-7c9fk_bd9c8e50-a7e3-49c9-a2dc-626d5324539a/gather/0.log" Nov 24 17:25:44 crc kubenswrapper[4768]: I1124 17:25:44.216462 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qhddh/must-gather-7c9fk"] Nov 24 17:25:44 crc kubenswrapper[4768]: I1124 17:25:44.217395 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-qhddh/must-gather-7c9fk" podUID="bd9c8e50-a7e3-49c9-a2dc-626d5324539a" containerName="copy" containerID="cri-o://c405567f39b0f40d3a7657636ff3146a97afc4cc965bf28cd6ea78ee3d8fbc49" gracePeriod=2 Nov 24 17:25:44 crc kubenswrapper[4768]: I1124 17:25:44.227090 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-qhddh/must-gather-7c9fk"] Nov 24 17:25:45 crc kubenswrapper[4768]: I1124 17:25:45.090663 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qhddh_must-gather-7c9fk_bd9c8e50-a7e3-49c9-a2dc-626d5324539a/copy/0.log" Nov 24 17:25:45 crc kubenswrapper[4768]: I1124 17:25:45.091409 4768 generic.go:334] "Generic (PLEG): container finished" podID="bd9c8e50-a7e3-49c9-a2dc-626d5324539a" containerID="c405567f39b0f40d3a7657636ff3146a97afc4cc965bf28cd6ea78ee3d8fbc49" exitCode=143 Nov 24 17:25:45 crc kubenswrapper[4768]: I1124 17:25:45.200961 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qhddh_must-gather-7c9fk_bd9c8e50-a7e3-49c9-a2dc-626d5324539a/copy/0.log" Nov 24 17:25:45 crc kubenswrapper[4768]: I1124 17:25:45.201331 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhddh/must-gather-7c9fk" Nov 24 17:25:45 crc kubenswrapper[4768]: I1124 17:25:45.351137 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd9c8e50-a7e3-49c9-a2dc-626d5324539a-must-gather-output\") pod \"bd9c8e50-a7e3-49c9-a2dc-626d5324539a\" (UID: \"bd9c8e50-a7e3-49c9-a2dc-626d5324539a\") " Nov 24 17:25:45 crc kubenswrapper[4768]: I1124 17:25:45.351316 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwds6\" (UniqueName: \"kubernetes.io/projected/bd9c8e50-a7e3-49c9-a2dc-626d5324539a-kube-api-access-nwds6\") pod \"bd9c8e50-a7e3-49c9-a2dc-626d5324539a\" (UID: \"bd9c8e50-a7e3-49c9-a2dc-626d5324539a\") " Nov 24 17:25:45 crc kubenswrapper[4768]: I1124 17:25:45.358431 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd9c8e50-a7e3-49c9-a2dc-626d5324539a-kube-api-access-nwds6" (OuterVolumeSpecName: "kube-api-access-nwds6") pod "bd9c8e50-a7e3-49c9-a2dc-626d5324539a" (UID: "bd9c8e50-a7e3-49c9-a2dc-626d5324539a"). InnerVolumeSpecName "kube-api-access-nwds6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:25:45 crc kubenswrapper[4768]: I1124 17:25:45.453802 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwds6\" (UniqueName: \"kubernetes.io/projected/bd9c8e50-a7e3-49c9-a2dc-626d5324539a-kube-api-access-nwds6\") on node \"crc\" DevicePath \"\"" Nov 24 17:25:45 crc kubenswrapper[4768]: I1124 17:25:45.472170 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd9c8e50-a7e3-49c9-a2dc-626d5324539a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bd9c8e50-a7e3-49c9-a2dc-626d5324539a" (UID: "bd9c8e50-a7e3-49c9-a2dc-626d5324539a"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:25:45 crc kubenswrapper[4768]: I1124 17:25:45.556044 4768 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bd9c8e50-a7e3-49c9-a2dc-626d5324539a-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 24 17:25:45 crc kubenswrapper[4768]: I1124 17:25:45.592755 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd9c8e50-a7e3-49c9-a2dc-626d5324539a" path="/var/lib/kubelet/pods/bd9c8e50-a7e3-49c9-a2dc-626d5324539a/volumes" Nov 24 17:25:46 crc kubenswrapper[4768]: I1124 17:25:46.100956 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qhddh_must-gather-7c9fk_bd9c8e50-a7e3-49c9-a2dc-626d5324539a/copy/0.log" Nov 24 17:25:46 crc kubenswrapper[4768]: I1124 17:25:46.101328 4768 scope.go:117] "RemoveContainer" containerID="c405567f39b0f40d3a7657636ff3146a97afc4cc965bf28cd6ea78ee3d8fbc49" Nov 24 17:25:46 crc kubenswrapper[4768]: I1124 17:25:46.101415 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qhddh/must-gather-7c9fk" Nov 24 17:25:46 crc kubenswrapper[4768]: I1124 17:25:46.122401 4768 scope.go:117] "RemoveContainer" containerID="f6dee8f6402463803d2c05da04a7e555ae851dee153adfcdd480683a86671f23" Nov 24 17:26:12 crc kubenswrapper[4768]: I1124 17:26:12.976349 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vs8ts"] Nov 24 17:26:12 crc kubenswrapper[4768]: E1124 17:26:12.977363 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b" containerName="container-00" Nov 24 17:26:12 crc kubenswrapper[4768]: I1124 17:26:12.977385 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b" containerName="container-00" Nov 24 17:26:12 crc kubenswrapper[4768]: E1124 17:26:12.977450 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd9c8e50-a7e3-49c9-a2dc-626d5324539a" containerName="gather" Nov 24 17:26:12 crc kubenswrapper[4768]: I1124 17:26:12.977459 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd9c8e50-a7e3-49c9-a2dc-626d5324539a" containerName="gather" Nov 24 17:26:12 crc kubenswrapper[4768]: E1124 17:26:12.977476 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd9c8e50-a7e3-49c9-a2dc-626d5324539a" containerName="copy" Nov 24 17:26:12 crc kubenswrapper[4768]: I1124 17:26:12.977484 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd9c8e50-a7e3-49c9-a2dc-626d5324539a" containerName="copy" Nov 24 17:26:12 crc kubenswrapper[4768]: I1124 17:26:12.977695 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd9c8e50-a7e3-49c9-a2dc-626d5324539a" containerName="copy" Nov 24 17:26:12 crc kubenswrapper[4768]: I1124 17:26:12.977713 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7cbcfbc-59b6-4bc0-88da-9bd16f0ad59b" containerName="container-00" Nov 24 17:26:12 crc kubenswrapper[4768]: I1124 17:26:12.977736 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd9c8e50-a7e3-49c9-a2dc-626d5324539a" containerName="gather" Nov 24 17:26:12 crc kubenswrapper[4768]: I1124 17:26:12.979536 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vs8ts" Nov 24 17:26:12 crc kubenswrapper[4768]: I1124 17:26:12.985871 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vs8ts"] Nov 24 17:26:13 crc kubenswrapper[4768]: I1124 17:26:13.095448 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnrp6\" (UniqueName: \"kubernetes.io/projected/544f63ce-a98c-4661-81bb-04db912ed440-kube-api-access-mnrp6\") pod \"redhat-marketplace-vs8ts\" (UID: \"544f63ce-a98c-4661-81bb-04db912ed440\") " pod="openshift-marketplace/redhat-marketplace-vs8ts" Nov 24 17:26:13 crc kubenswrapper[4768]: I1124 17:26:13.095493 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544f63ce-a98c-4661-81bb-04db912ed440-utilities\") pod \"redhat-marketplace-vs8ts\" (UID: \"544f63ce-a98c-4661-81bb-04db912ed440\") " pod="openshift-marketplace/redhat-marketplace-vs8ts" Nov 24 17:26:13 crc kubenswrapper[4768]: I1124 17:26:13.095675 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544f63ce-a98c-4661-81bb-04db912ed440-catalog-content\") pod \"redhat-marketplace-vs8ts\" (UID: \"544f63ce-a98c-4661-81bb-04db912ed440\") " pod="openshift-marketplace/redhat-marketplace-vs8ts" Nov 24 17:26:13 crc kubenswrapper[4768]: I1124 17:26:13.197313 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnrp6\" (UniqueName: \"kubernetes.io/projected/544f63ce-a98c-4661-81bb-04db912ed440-kube-api-access-mnrp6\") pod \"redhat-marketplace-vs8ts\" (UID: \"544f63ce-a98c-4661-81bb-04db912ed440\") " pod="openshift-marketplace/redhat-marketplace-vs8ts" Nov 24 17:26:13 crc kubenswrapper[4768]: I1124 17:26:13.197356 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544f63ce-a98c-4661-81bb-04db912ed440-utilities\") pod \"redhat-marketplace-vs8ts\" (UID: \"544f63ce-a98c-4661-81bb-04db912ed440\") " pod="openshift-marketplace/redhat-marketplace-vs8ts" Nov 24 17:26:13 crc kubenswrapper[4768]: I1124 17:26:13.197450 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544f63ce-a98c-4661-81bb-04db912ed440-catalog-content\") pod \"redhat-marketplace-vs8ts\" (UID: \"544f63ce-a98c-4661-81bb-04db912ed440\") " pod="openshift-marketplace/redhat-marketplace-vs8ts" Nov 24 17:26:13 crc kubenswrapper[4768]: I1124 17:26:13.197920 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544f63ce-a98c-4661-81bb-04db912ed440-catalog-content\") pod \"redhat-marketplace-vs8ts\" (UID: \"544f63ce-a98c-4661-81bb-04db912ed440\") " pod="openshift-marketplace/redhat-marketplace-vs8ts" Nov 24 17:26:13 crc kubenswrapper[4768]: I1124 17:26:13.197976 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544f63ce-a98c-4661-81bb-04db912ed440-utilities\") pod \"redhat-marketplace-vs8ts\" (UID: \"544f63ce-a98c-4661-81bb-04db912ed440\") " pod="openshift-marketplace/redhat-marketplace-vs8ts" Nov 24 17:26:13 crc kubenswrapper[4768]: I1124 17:26:13.241506 4768 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mnrp6\" (UniqueName: \"kubernetes.io/projected/544f63ce-a98c-4661-81bb-04db912ed440-kube-api-access-mnrp6\") pod \"redhat-marketplace-vs8ts\" (UID: \"544f63ce-a98c-4661-81bb-04db912ed440\") " pod="openshift-marketplace/redhat-marketplace-vs8ts" Nov 24 17:26:13 crc kubenswrapper[4768]: I1124 17:26:13.302714 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vs8ts" Nov 24 17:26:13 crc kubenswrapper[4768]: I1124 17:26:13.816199 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vs8ts"] Nov 24 17:26:14 crc kubenswrapper[4768]: I1124 17:26:14.343073 4768 generic.go:334] "Generic (PLEG): container finished" podID="544f63ce-a98c-4661-81bb-04db912ed440" containerID="a13c1e0e084c0862f4fe85342cc531e63233ec30e294763b78df9ae0a07576d4" exitCode=0 Nov 24 17:26:14 crc kubenswrapper[4768]: I1124 17:26:14.343123 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vs8ts" event={"ID":"544f63ce-a98c-4661-81bb-04db912ed440","Type":"ContainerDied","Data":"a13c1e0e084c0862f4fe85342cc531e63233ec30e294763b78df9ae0a07576d4"} Nov 24 17:26:14 crc kubenswrapper[4768]: I1124 17:26:14.343501 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vs8ts" event={"ID":"544f63ce-a98c-4661-81bb-04db912ed440","Type":"ContainerStarted","Data":"1ba2f69c0eb1f3c381ec4fd6e29b5f78989c2b03af811a6d59c83a464cebc7d6"} Nov 24 17:26:14 crc kubenswrapper[4768]: I1124 17:26:14.347859 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 17:26:15 crc kubenswrapper[4768]: I1124 17:26:15.353293 4768 generic.go:334] "Generic (PLEG): container finished" podID="544f63ce-a98c-4661-81bb-04db912ed440" containerID="e63a86c446d335b4bcb93e8aed35771725f1027c59ed16a31d6c7c14979c15d0" exitCode=0 Nov 24 17:26:15 crc kubenswrapper[4768]: I1124 17:26:15.353420 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vs8ts" event={"ID":"544f63ce-a98c-4661-81bb-04db912ed440","Type":"ContainerDied","Data":"e63a86c446d335b4bcb93e8aed35771725f1027c59ed16a31d6c7c14979c15d0"} Nov 24 17:26:16 crc kubenswrapper[4768]: I1124 17:26:16.374280 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vs8ts" event={"ID":"544f63ce-a98c-4661-81bb-04db912ed440","Type":"ContainerStarted","Data":"db99f4d4fe6e15f3e257ac32d96c52e70d1ae1a5cc9a90639473ed111379bf67"} Nov 24 17:26:16 crc kubenswrapper[4768]: I1124 17:26:16.398595 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vs8ts" podStartSLOduration=2.948881961 podStartE2EDuration="4.398564312s" podCreationTimestamp="2025-11-24 17:26:12 +0000 UTC" firstStartedPulling="2025-11-24 17:26:14.347646526 +0000 UTC m=+2055.594615184" lastFinishedPulling="2025-11-24 17:26:15.797328877 +0000 UTC m=+2057.044297535" observedRunningTime="2025-11-24 17:26:16.394065795 +0000 UTC m=+2057.641034453" watchObservedRunningTime="2025-11-24 17:26:16.398564312 +0000 UTC m=+2057.645532980" Nov 24 17:26:23 crc kubenswrapper[4768]: I1124 17:26:23.303641 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vs8ts" Nov 24 17:26:23 crc kubenswrapper[4768]: I1124 17:26:23.304088 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vs8ts" Nov 24 17:26:23 crc kubenswrapper[4768]: I1124 17:26:23.354805 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vs8ts" Nov 24 17:26:23 crc kubenswrapper[4768]: I1124 17:26:23.475976 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vs8ts" Nov 24 17:26:23 crc kubenswrapper[4768]: I1124 17:26:23.596053 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vs8ts"] Nov 24 17:26:25 crc kubenswrapper[4768]: I1124 17:26:25.446675 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vs8ts" podUID="544f63ce-a98c-4661-81bb-04db912ed440" containerName="registry-server" containerID="cri-o://db99f4d4fe6e15f3e257ac32d96c52e70d1ae1a5cc9a90639473ed111379bf67" gracePeriod=2 Nov 24 17:26:25 crc kubenswrapper[4768]: I1124 17:26:25.859479 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vs8ts" Nov 24 17:26:25 crc kubenswrapper[4768]: I1124 17:26:25.952565 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544f63ce-a98c-4661-81bb-04db912ed440-utilities\") pod \"544f63ce-a98c-4661-81bb-04db912ed440\" (UID: \"544f63ce-a98c-4661-81bb-04db912ed440\") " Nov 24 17:26:25 crc kubenswrapper[4768]: I1124 17:26:25.952676 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544f63ce-a98c-4661-81bb-04db912ed440-catalog-content\") pod \"544f63ce-a98c-4661-81bb-04db912ed440\" (UID: \"544f63ce-a98c-4661-81bb-04db912ed440\") " Nov 24 17:26:25 crc kubenswrapper[4768]: I1124 17:26:25.952754 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrp6\" (UniqueName: \"kubernetes.io/projected/544f63ce-a98c-4661-81bb-04db912ed440-kube-api-access-mnrp6\") pod \"544f63ce-a98c-4661-81bb-04db912ed440\" (UID: \"544f63ce-a98c-4661-81bb-04db912ed440\") " Nov 24 17:26:25 crc kubenswrapper[4768]: I1124 17:26:25.953286 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/544f63ce-a98c-4661-81bb-04db912ed440-utilities" (OuterVolumeSpecName: "utilities") pod "544f63ce-a98c-4661-81bb-04db912ed440" (UID: "544f63ce-a98c-4661-81bb-04db912ed440"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:26:25 crc kubenswrapper[4768]: I1124 17:26:25.965499 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/544f63ce-a98c-4661-81bb-04db912ed440-kube-api-access-mnrp6" (OuterVolumeSpecName: "kube-api-access-mnrp6") pod "544f63ce-a98c-4661-81bb-04db912ed440" (UID: "544f63ce-a98c-4661-81bb-04db912ed440"). InnerVolumeSpecName "kube-api-access-mnrp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:26:26 crc kubenswrapper[4768]: I1124 17:26:26.054905 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrp6\" (UniqueName: \"kubernetes.io/projected/544f63ce-a98c-4661-81bb-04db912ed440-kube-api-access-mnrp6\") on node \"crc\" DevicePath \"\"" Nov 24 17:26:26 crc kubenswrapper[4768]: I1124 17:26:26.054938 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544f63ce-a98c-4661-81bb-04db912ed440-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:26:26 crc kubenswrapper[4768]: I1124 17:26:26.091704 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/544f63ce-a98c-4661-81bb-04db912ed440-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "544f63ce-a98c-4661-81bb-04db912ed440" (UID: "544f63ce-a98c-4661-81bb-04db912ed440"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:26:26 crc kubenswrapper[4768]: I1124 17:26:26.158032 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544f63ce-a98c-4661-81bb-04db912ed440-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:26:26 crc kubenswrapper[4768]: I1124 17:26:26.457314 4768 generic.go:334] "Generic (PLEG): container finished" podID="544f63ce-a98c-4661-81bb-04db912ed440" containerID="db99f4d4fe6e15f3e257ac32d96c52e70d1ae1a5cc9a90639473ed111379bf67" exitCode=0 Nov 24 17:26:26 crc kubenswrapper[4768]: I1124 17:26:26.457386 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vs8ts" event={"ID":"544f63ce-a98c-4661-81bb-04db912ed440","Type":"ContainerDied","Data":"db99f4d4fe6e15f3e257ac32d96c52e70d1ae1a5cc9a90639473ed111379bf67"} Nov 24 17:26:26 crc kubenswrapper[4768]: I1124 17:26:26.457638 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vs8ts" event={"ID":"544f63ce-a98c-4661-81bb-04db912ed440","Type":"ContainerDied","Data":"1ba2f69c0eb1f3c381ec4fd6e29b5f78989c2b03af811a6d59c83a464cebc7d6"} Nov 24 17:26:26 crc kubenswrapper[4768]: I1124 17:26:26.457660 4768 scope.go:117] "RemoveContainer" containerID="db99f4d4fe6e15f3e257ac32d96c52e70d1ae1a5cc9a90639473ed111379bf67" Nov 24 17:26:26 crc kubenswrapper[4768]: I1124 17:26:26.457420 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vs8ts" Nov 24 17:26:26 crc kubenswrapper[4768]: I1124 17:26:26.473982 4768 scope.go:117] "RemoveContainer" containerID="e63a86c446d335b4bcb93e8aed35771725f1027c59ed16a31d6c7c14979c15d0" Nov 24 17:26:26 crc kubenswrapper[4768]: I1124 17:26:26.493626 4768 scope.go:117] "RemoveContainer" containerID="a13c1e0e084c0862f4fe85342cc531e63233ec30e294763b78df9ae0a07576d4" Nov 24 17:26:26 crc kubenswrapper[4768]: I1124 17:26:26.497333 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vs8ts"] Nov 24 17:26:26 crc kubenswrapper[4768]: I1124 17:26:26.504893 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vs8ts"] Nov 24 17:26:26 crc kubenswrapper[4768]: I1124 17:26:26.563195 4768 scope.go:117] "RemoveContainer" containerID="db99f4d4fe6e15f3e257ac32d96c52e70d1ae1a5cc9a90639473ed111379bf67" Nov 24 17:26:26 crc kubenswrapper[4768]: E1124 17:26:26.563834 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db99f4d4fe6e15f3e257ac32d96c52e70d1ae1a5cc9a90639473ed111379bf67\": container with ID starting with db99f4d4fe6e15f3e257ac32d96c52e70d1ae1a5cc9a90639473ed111379bf67 not found: ID does not exist" containerID="db99f4d4fe6e15f3e257ac32d96c52e70d1ae1a5cc9a90639473ed111379bf67" Nov 24 17:26:26 crc kubenswrapper[4768]: I1124 17:26:26.563872 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db99f4d4fe6e15f3e257ac32d96c52e70d1ae1a5cc9a90639473ed111379bf67"} err="failed to get container status \"db99f4d4fe6e15f3e257ac32d96c52e70d1ae1a5cc9a90639473ed111379bf67\": rpc error: code = NotFound desc = could not find container \"db99f4d4fe6e15f3e257ac32d96c52e70d1ae1a5cc9a90639473ed111379bf67\": container with ID starting with db99f4d4fe6e15f3e257ac32d96c52e70d1ae1a5cc9a90639473ed111379bf67 not found: ID does not exist" Nov 24 17:26:26 crc kubenswrapper[4768]: I1124 17:26:26.563899 4768 scope.go:117] "RemoveContainer" containerID="e63a86c446d335b4bcb93e8aed35771725f1027c59ed16a31d6c7c14979c15d0" Nov 24 17:26:26 crc kubenswrapper[4768]: E1124 17:26:26.564284 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e63a86c446d335b4bcb93e8aed35771725f1027c59ed16a31d6c7c14979c15d0\": container with ID starting with e63a86c446d335b4bcb93e8aed35771725f1027c59ed16a31d6c7c14979c15d0 not found: ID does not exist" containerID="e63a86c446d335b4bcb93e8aed35771725f1027c59ed16a31d6c7c14979c15d0" Nov 24 17:26:26 crc kubenswrapper[4768]: I1124 17:26:26.564390 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e63a86c446d335b4bcb93e8aed35771725f1027c59ed16a31d6c7c14979c15d0"} err="failed to get container status \"e63a86c446d335b4bcb93e8aed35771725f1027c59ed16a31d6c7c14979c15d0\": rpc error: code = NotFound desc = could not find container \"e63a86c446d335b4bcb93e8aed35771725f1027c59ed16a31d6c7c14979c15d0\": container with ID starting with e63a86c446d335b4bcb93e8aed35771725f1027c59ed16a31d6c7c14979c15d0 not found: ID does not exist" Nov 24 17:26:26 crc kubenswrapper[4768]: I1124 17:26:26.564410 4768 scope.go:117] "RemoveContainer" containerID="a13c1e0e084c0862f4fe85342cc531e63233ec30e294763b78df9ae0a07576d4" Nov 24 17:26:26 crc kubenswrapper[4768]: E1124 17:26:26.565240 4768 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a13c1e0e084c0862f4fe85342cc531e63233ec30e294763b78df9ae0a07576d4\": container with ID starting with a13c1e0e084c0862f4fe85342cc531e63233ec30e294763b78df9ae0a07576d4 not found: ID does not exist" containerID="a13c1e0e084c0862f4fe85342cc531e63233ec30e294763b78df9ae0a07576d4" Nov 24 17:26:26 crc kubenswrapper[4768]: I1124 17:26:26.565300 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a13c1e0e084c0862f4fe85342cc531e63233ec30e294763b78df9ae0a07576d4"} err="failed to get container status \"a13c1e0e084c0862f4fe85342cc531e63233ec30e294763b78df9ae0a07576d4\": rpc error: code = NotFound desc = could not find container \"a13c1e0e084c0862f4fe85342cc531e63233ec30e294763b78df9ae0a07576d4\": container with ID starting with a13c1e0e084c0862f4fe85342cc531e63233ec30e294763b78df9ae0a07576d4 not found: ID does not exist" Nov 24 17:26:27 crc kubenswrapper[4768]: I1124 17:26:27.594413 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="544f63ce-a98c-4661-81bb-04db912ed440" path="/var/lib/kubelet/pods/544f63ce-a98c-4661-81bb-04db912ed440/volumes" Nov 24 17:26:48 crc kubenswrapper[4768]: I1124 17:26:48.670983 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9j5s5"] Nov 24 17:26:48 crc kubenswrapper[4768]: E1124 17:26:48.671936 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544f63ce-a98c-4661-81bb-04db912ed440" containerName="extract-utilities" Nov 24 17:26:48 crc kubenswrapper[4768]: I1124 17:26:48.671951 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="544f63ce-a98c-4661-81bb-04db912ed440" containerName="extract-utilities" Nov 24 17:26:48 crc kubenswrapper[4768]: E1124 17:26:48.671978 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544f63ce-a98c-4661-81bb-04db912ed440" containerName="extract-content" Nov 24 17:26:48 crc kubenswrapper[4768]: I1124 17:26:48.671984 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="544f63ce-a98c-4661-81bb-04db912ed440" containerName="extract-content" Nov 24 17:26:48 crc kubenswrapper[4768]: E1124 17:26:48.671999 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544f63ce-a98c-4661-81bb-04db912ed440" containerName="registry-server" Nov 24 17:26:48 crc kubenswrapper[4768]: I1124 17:26:48.672008 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="544f63ce-a98c-4661-81bb-04db912ed440" containerName="registry-server" Nov 24 17:26:48 crc kubenswrapper[4768]: I1124 17:26:48.672204 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="544f63ce-a98c-4661-81bb-04db912ed440" containerName="registry-server" Nov 24 17:26:48 crc kubenswrapper[4768]: I1124 17:26:48.673888 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9j5s5" Nov 24 17:26:48 crc kubenswrapper[4768]: I1124 17:26:48.683156 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9j5s5"] Nov 24 17:26:48 crc kubenswrapper[4768]: I1124 17:26:48.773132 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prcgw\" (UniqueName: \"kubernetes.io/projected/42bf9a47-95e8-4c34-aee4-d6c0bd62e406-kube-api-access-prcgw\") pod \"redhat-operators-9j5s5\" (UID: \"42bf9a47-95e8-4c34-aee4-d6c0bd62e406\") " pod="openshift-marketplace/redhat-operators-9j5s5" Nov 24 17:26:48 crc kubenswrapper[4768]: I1124 17:26:48.773222 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42bf9a47-95e8-4c34-aee4-d6c0bd62e406-utilities\") pod \"redhat-operators-9j5s5\" (UID: \"42bf9a47-95e8-4c34-aee4-d6c0bd62e406\") " pod="openshift-marketplace/redhat-operators-9j5s5" Nov 24 17:26:48 crc kubenswrapper[4768]: I1124 17:26:48.773417 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42bf9a47-95e8-4c34-aee4-d6c0bd62e406-catalog-content\") pod \"redhat-operators-9j5s5\" (UID: \"42bf9a47-95e8-4c34-aee4-d6c0bd62e406\") " pod="openshift-marketplace/redhat-operators-9j5s5" Nov 24 17:26:48 crc kubenswrapper[4768]: I1124 17:26:48.906085 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42bf9a47-95e8-4c34-aee4-d6c0bd62e406-catalog-content\") pod \"redhat-operators-9j5s5\" (UID: \"42bf9a47-95e8-4c34-aee4-d6c0bd62e406\") " pod="openshift-marketplace/redhat-operators-9j5s5" Nov 24 17:26:48 crc kubenswrapper[4768]: I1124 17:26:48.906193 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prcgw\" (UniqueName: \"kubernetes.io/projected/42bf9a47-95e8-4c34-aee4-d6c0bd62e406-kube-api-access-prcgw\") pod \"redhat-operators-9j5s5\" (UID: \"42bf9a47-95e8-4c34-aee4-d6c0bd62e406\") " pod="openshift-marketplace/redhat-operators-9j5s5" Nov 24 17:26:48 crc kubenswrapper[4768]: I1124 17:26:48.906307 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42bf9a47-95e8-4c34-aee4-d6c0bd62e406-utilities\") pod \"redhat-operators-9j5s5\" (UID: \"42bf9a47-95e8-4c34-aee4-d6c0bd62e406\") " pod="openshift-marketplace/redhat-operators-9j5s5" Nov 24 17:26:48 crc kubenswrapper[4768]: I1124 17:26:48.906819 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42bf9a47-95e8-4c34-aee4-d6c0bd62e406-utilities\") pod \"redhat-operators-9j5s5\" (UID: \"42bf9a47-95e8-4c34-aee4-d6c0bd62e406\") " pod="openshift-marketplace/redhat-operators-9j5s5" Nov 24 17:26:48 crc kubenswrapper[4768]: I1124 17:26:48.907102 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42bf9a47-95e8-4c34-aee4-d6c0bd62e406-catalog-content\") pod \"redhat-operators-9j5s5\" (UID: \"42bf9a47-95e8-4c34-aee4-d6c0bd62e406\") " pod="openshift-marketplace/redhat-operators-9j5s5" Nov 24 17:26:48 crc kubenswrapper[4768]: I1124 17:26:48.934118 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-prcgw\" (UniqueName: \"kubernetes.io/projected/42bf9a47-95e8-4c34-aee4-d6c0bd62e406-kube-api-access-prcgw\") pod \"redhat-operators-9j5s5\" (UID: \"42bf9a47-95e8-4c34-aee4-d6c0bd62e406\") " pod="openshift-marketplace/redhat-operators-9j5s5" Nov 24 17:26:49 crc kubenswrapper[4768]: I1124 17:26:49.073238 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9j5s5" Nov 24 17:26:49 crc kubenswrapper[4768]: I1124 17:26:49.548681 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9j5s5"] Nov 24 17:26:49 crc kubenswrapper[4768]: I1124 17:26:49.649299 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9j5s5" event={"ID":"42bf9a47-95e8-4c34-aee4-d6c0bd62e406","Type":"ContainerStarted","Data":"7834dc3aa2b8466b51f2a278b0453fcdb1867109a36efdb55322ace13c498688"} Nov 24 17:26:50 crc kubenswrapper[4768]: I1124 17:26:50.659930 4768 generic.go:334] "Generic (PLEG): container finished" podID="42bf9a47-95e8-4c34-aee4-d6c0bd62e406" containerID="8b208c186f3225dfb11d463c5c654b22a6547297c40b571d857daf6be78caf82" exitCode=0 Nov 24 17:26:50 crc kubenswrapper[4768]: I1124 17:26:50.659993 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9j5s5" event={"ID":"42bf9a47-95e8-4c34-aee4-d6c0bd62e406","Type":"ContainerDied","Data":"8b208c186f3225dfb11d463c5c654b22a6547297c40b571d857daf6be78caf82"} Nov 24 17:26:52 crc kubenswrapper[4768]: I1124 17:26:52.677138 4768 generic.go:334] "Generic (PLEG): container finished" podID="42bf9a47-95e8-4c34-aee4-d6c0bd62e406" containerID="1b664068e55d835aa5bd99177f26dd3c4d2aa7f2b35cd229ddc153fd756244da" exitCode=0 Nov 24 17:26:52 crc kubenswrapper[4768]: I1124 17:26:52.677194 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9j5s5" event={"ID":"42bf9a47-95e8-4c34-aee4-d6c0bd62e406","Type":"ContainerDied","Data":"1b664068e55d835aa5bd99177f26dd3c4d2aa7f2b35cd229ddc153fd756244da"} Nov 24 17:26:53 crc kubenswrapper[4768]: I1124 17:26:53.685866 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9j5s5" event={"ID":"42bf9a47-95e8-4c34-aee4-d6c0bd62e406","Type":"ContainerStarted","Data":"43172d67f4cfec3552f2dde7974cf0c0357e96d1ce8d9b3dc582b3fc91b3f637"} Nov 24 17:26:53 crc kubenswrapper[4768]: I1124 17:26:53.705133 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9j5s5" podStartSLOduration=3.232589166 podStartE2EDuration="5.705116651s" podCreationTimestamp="2025-11-24 17:26:48 +0000 UTC" firstStartedPulling="2025-11-24 17:26:50.66276108 +0000 UTC m=+2091.909729738" lastFinishedPulling="2025-11-24 17:26:53.135288575 +0000 UTC m=+2094.382257223" observedRunningTime="2025-11-24 17:26:53.703626689 +0000 UTC m=+2094.950595347" watchObservedRunningTime="2025-11-24 17:26:53.705116651 +0000 UTC m=+2094.952085309" Nov 24 17:26:54 crc kubenswrapper[4768]: I1124 17:26:54.282783 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d7h4h"] Nov 24 17:26:54 crc kubenswrapper[4768]: I1124 17:26:54.285504 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d7h4h" Nov 24 17:26:54 crc kubenswrapper[4768]: I1124 17:26:54.305668 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d7h4h"] Nov 24 17:26:54 crc kubenswrapper[4768]: I1124 17:26:54.414788 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c0d370-2e34-4eee-8109-8836ecfcdef9-catalog-content\") pod \"certified-operators-d7h4h\" (UID: \"a7c0d370-2e34-4eee-8109-8836ecfcdef9\") " pod="openshift-marketplace/certified-operators-d7h4h" Nov 24 17:26:54 crc kubenswrapper[4768]: I1124 17:26:54.414880 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c0d370-2e34-4eee-8109-8836ecfcdef9-utilities\") pod \"certified-operators-d7h4h\" (UID: \"a7c0d370-2e34-4eee-8109-8836ecfcdef9\") " pod="openshift-marketplace/certified-operators-d7h4h" Nov 24 17:26:54 crc kubenswrapper[4768]: I1124 17:26:54.414961 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggrt5\" (UniqueName: \"kubernetes.io/projected/a7c0d370-2e34-4eee-8109-8836ecfcdef9-kube-api-access-ggrt5\") pod \"certified-operators-d7h4h\" (UID: \"a7c0d370-2e34-4eee-8109-8836ecfcdef9\") " pod="openshift-marketplace/certified-operators-d7h4h" Nov 24 17:26:54 crc kubenswrapper[4768]: I1124 17:26:54.516982 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c0d370-2e34-4eee-8109-8836ecfcdef9-catalog-content\") pod \"certified-operators-d7h4h\" (UID: \"a7c0d370-2e34-4eee-8109-8836ecfcdef9\") " pod="openshift-marketplace/certified-operators-d7h4h" Nov 24 17:26:54 crc kubenswrapper[4768]: I1124 17:26:54.517075 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c0d370-2e34-4eee-8109-8836ecfcdef9-utilities\") pod \"certified-operators-d7h4h\" (UID: \"a7c0d370-2e34-4eee-8109-8836ecfcdef9\") " pod="openshift-marketplace/certified-operators-d7h4h" Nov 24 17:26:54 crc kubenswrapper[4768]: I1124 17:26:54.517144 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggrt5\" (UniqueName: \"kubernetes.io/projected/a7c0d370-2e34-4eee-8109-8836ecfcdef9-kube-api-access-ggrt5\") pod \"certified-operators-d7h4h\" (UID: \"a7c0d370-2e34-4eee-8109-8836ecfcdef9\") " pod="openshift-marketplace/certified-operators-d7h4h" Nov 24 17:26:54 crc kubenswrapper[4768]: I1124 17:26:54.517452 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c0d370-2e34-4eee-8109-8836ecfcdef9-catalog-content\") pod \"certified-operators-d7h4h\" (UID: \"a7c0d370-2e34-4eee-8109-8836ecfcdef9\") " pod="openshift-marketplace/certified-operators-d7h4h" Nov 24 17:26:54 crc kubenswrapper[4768]: I1124 17:26:54.517635 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c0d370-2e34-4eee-8109-8836ecfcdef9-utilities\") pod \"certified-operators-d7h4h\" (UID: \"a7c0d370-2e34-4eee-8109-8836ecfcdef9\") " pod="openshift-marketplace/certified-operators-d7h4h" Nov 24 17:26:54 crc kubenswrapper[4768]: I1124 17:26:54.536720 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ggrt5\" (UniqueName: \"kubernetes.io/projected/a7c0d370-2e34-4eee-8109-8836ecfcdef9-kube-api-access-ggrt5\") pod \"certified-operators-d7h4h\" (UID: \"a7c0d370-2e34-4eee-8109-8836ecfcdef9\") " pod="openshift-marketplace/certified-operators-d7h4h" Nov 24 17:26:54 crc kubenswrapper[4768]: I1124 17:26:54.666386 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d7h4h" Nov 24 17:26:55 crc kubenswrapper[4768]: W1124 17:26:55.213701 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7c0d370_2e34_4eee_8109_8836ecfcdef9.slice/crio-1b8ea8aaf5701f0fa2bcb98764b09c3680baf2a382d62e6c84470d5f8d1a3f2f WatchSource:0}: Error finding container 1b8ea8aaf5701f0fa2bcb98764b09c3680baf2a382d62e6c84470d5f8d1a3f2f: Status 404 returned error can't find the container with id 1b8ea8aaf5701f0fa2bcb98764b09c3680baf2a382d62e6c84470d5f8d1a3f2f Nov 24 17:26:55 crc kubenswrapper[4768]: I1124 17:26:55.221620 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d7h4h"] Nov 24 17:26:55 crc kubenswrapper[4768]: I1124 17:26:55.706955 4768 generic.go:334] "Generic (PLEG): container finished" podID="a7c0d370-2e34-4eee-8109-8836ecfcdef9" containerID="c39b41c5f18667e6afef9f2e8b83a896008cb70e4ca711c587f9d55930dc426c" exitCode=0 Nov 24 17:26:55 crc kubenswrapper[4768]: I1124 17:26:55.706998 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7h4h" event={"ID":"a7c0d370-2e34-4eee-8109-8836ecfcdef9","Type":"ContainerDied","Data":"c39b41c5f18667e6afef9f2e8b83a896008cb70e4ca711c587f9d55930dc426c"} Nov 24 17:26:55 crc kubenswrapper[4768]: I1124 17:26:55.707042 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7h4h" event={"ID":"a7c0d370-2e34-4eee-8109-8836ecfcdef9","Type":"ContainerStarted","Data":"1b8ea8aaf5701f0fa2bcb98764b09c3680baf2a382d62e6c84470d5f8d1a3f2f"} Nov 24 17:26:56 crc kubenswrapper[4768]: I1124 17:26:56.719204 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7h4h" event={"ID":"a7c0d370-2e34-4eee-8109-8836ecfcdef9","Type":"ContainerStarted","Data":"646d0631d6b86be56a3415f8f124b52b5f3c65343fb6a38e48f2120d29537f4c"} Nov 24 17:26:57 crc kubenswrapper[4768]: I1124 17:26:57.728708 4768 generic.go:334] "Generic (PLEG): container finished" podID="a7c0d370-2e34-4eee-8109-8836ecfcdef9" containerID="646d0631d6b86be56a3415f8f124b52b5f3c65343fb6a38e48f2120d29537f4c" exitCode=0 Nov 24 17:26:57 crc kubenswrapper[4768]: I1124 17:26:57.728823 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7h4h" event={"ID":"a7c0d370-2e34-4eee-8109-8836ecfcdef9","Type":"ContainerDied","Data":"646d0631d6b86be56a3415f8f124b52b5f3c65343fb6a38e48f2120d29537f4c"} Nov 24 17:26:58 crc kubenswrapper[4768]: I1124 17:26:58.739208 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7h4h" event={"ID":"a7c0d370-2e34-4eee-8109-8836ecfcdef9","Type":"ContainerStarted","Data":"5dae5c3f47c56a4702a7bcae8da9ab577e0cc60cefa93815ab7db0741cd94384"} Nov 24 17:26:58 crc kubenswrapper[4768]: I1124 17:26:58.760923 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d7h4h" 
podStartSLOduration=2.334017054 podStartE2EDuration="4.760903127s" podCreationTimestamp="2025-11-24 17:26:54 +0000 UTC" firstStartedPulling="2025-11-24 17:26:55.709438098 +0000 UTC m=+2096.956406756" lastFinishedPulling="2025-11-24 17:26:58.136324171 +0000 UTC m=+2099.383292829" observedRunningTime="2025-11-24 17:26:58.757540232 +0000 UTC m=+2100.004508890" watchObservedRunningTime="2025-11-24 17:26:58.760903127 +0000 UTC m=+2100.007871785" Nov 24 17:26:59 crc kubenswrapper[4768]: I1124 17:26:59.074388 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9j5s5" Nov 24 17:26:59 crc kubenswrapper[4768]: I1124 17:26:59.074452 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9j5s5" Nov 24 17:26:59 crc kubenswrapper[4768]: I1124 17:26:59.179778 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9j5s5" Nov 24 17:26:59 crc kubenswrapper[4768]: I1124 17:26:59.793403 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9j5s5" Nov 24 17:27:01 crc kubenswrapper[4768]: I1124 17:27:01.242554 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9j5s5"] Nov 24 17:27:01 crc kubenswrapper[4768]: I1124 17:27:01.763424 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9j5s5" podUID="42bf9a47-95e8-4c34-aee4-d6c0bd62e406" containerName="registry-server" containerID="cri-o://43172d67f4cfec3552f2dde7974cf0c0357e96d1ce8d9b3dc582b3fc91b3f637" gracePeriod=2 Nov 24 17:27:02 crc kubenswrapper[4768]: I1124 17:27:02.772403 4768 generic.go:334] "Generic (PLEG): container finished" podID="42bf9a47-95e8-4c34-aee4-d6c0bd62e406" containerID="43172d67f4cfec3552f2dde7974cf0c0357e96d1ce8d9b3dc582b3fc91b3f637" exitCode=0 Nov 24 17:27:02 crc kubenswrapper[4768]: I1124 17:27:02.772479 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9j5s5" event={"ID":"42bf9a47-95e8-4c34-aee4-d6c0bd62e406","Type":"ContainerDied","Data":"43172d67f4cfec3552f2dde7974cf0c0357e96d1ce8d9b3dc582b3fc91b3f637"} Nov 24 17:27:03 crc kubenswrapper[4768]: I1124 17:27:03.313734 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9j5s5" Nov 24 17:27:03 crc kubenswrapper[4768]: I1124 17:27:03.394092 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prcgw\" (UniqueName: \"kubernetes.io/projected/42bf9a47-95e8-4c34-aee4-d6c0bd62e406-kube-api-access-prcgw\") pod \"42bf9a47-95e8-4c34-aee4-d6c0bd62e406\" (UID: \"42bf9a47-95e8-4c34-aee4-d6c0bd62e406\") " Nov 24 17:27:03 crc kubenswrapper[4768]: I1124 17:27:03.394196 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42bf9a47-95e8-4c34-aee4-d6c0bd62e406-utilities\") pod \"42bf9a47-95e8-4c34-aee4-d6c0bd62e406\" (UID: \"42bf9a47-95e8-4c34-aee4-d6c0bd62e406\") " Nov 24 17:27:03 crc kubenswrapper[4768]: I1124 17:27:03.394232 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42bf9a47-95e8-4c34-aee4-d6c0bd62e406-catalog-content\") pod \"42bf9a47-95e8-4c34-aee4-d6c0bd62e406\" (UID: \"42bf9a47-95e8-4c34-aee4-d6c0bd62e406\") " Nov 24 17:27:03 crc kubenswrapper[4768]: I1124 17:27:03.395507 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42bf9a47-95e8-4c34-aee4-d6c0bd62e406-utilities" (OuterVolumeSpecName: "utilities") pod "42bf9a47-95e8-4c34-aee4-d6c0bd62e406" (UID: "42bf9a47-95e8-4c34-aee4-d6c0bd62e406"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:27:03 crc kubenswrapper[4768]: I1124 17:27:03.401117 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42bf9a47-95e8-4c34-aee4-d6c0bd62e406-kube-api-access-prcgw" (OuterVolumeSpecName: "kube-api-access-prcgw") pod "42bf9a47-95e8-4c34-aee4-d6c0bd62e406" (UID: "42bf9a47-95e8-4c34-aee4-d6c0bd62e406"). InnerVolumeSpecName "kube-api-access-prcgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:27:03 crc kubenswrapper[4768]: I1124 17:27:03.481553 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42bf9a47-95e8-4c34-aee4-d6c0bd62e406-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42bf9a47-95e8-4c34-aee4-d6c0bd62e406" (UID: "42bf9a47-95e8-4c34-aee4-d6c0bd62e406"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:27:03 crc kubenswrapper[4768]: I1124 17:27:03.496603 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42bf9a47-95e8-4c34-aee4-d6c0bd62e406-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:27:03 crc kubenswrapper[4768]: I1124 17:27:03.496643 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42bf9a47-95e8-4c34-aee4-d6c0bd62e406-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:27:03 crc kubenswrapper[4768]: I1124 17:27:03.496656 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prcgw\" (UniqueName: \"kubernetes.io/projected/42bf9a47-95e8-4c34-aee4-d6c0bd62e406-kube-api-access-prcgw\") on node \"crc\" DevicePath \"\"" Nov 24 17:27:03 crc kubenswrapper[4768]: I1124 17:27:03.782383 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9j5s5" event={"ID":"42bf9a47-95e8-4c34-aee4-d6c0bd62e406","Type":"ContainerDied","Data":"7834dc3aa2b8466b51f2a278b0453fcdb1867109a36efdb55322ace13c498688"} Nov 24 17:27:03 crc kubenswrapper[4768]: I1124 17:27:03.782436 4768 scope.go:117] "RemoveContainer" containerID="43172d67f4cfec3552f2dde7974cf0c0357e96d1ce8d9b3dc582b3fc91b3f637" Nov 24 17:27:03 crc kubenswrapper[4768]: I1124 17:27:03.782465 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9j5s5" Nov 24 17:27:03 crc kubenswrapper[4768]: I1124 17:27:03.800780 4768 scope.go:117] "RemoveContainer" containerID="1b664068e55d835aa5bd99177f26dd3c4d2aa7f2b35cd229ddc153fd756244da" Nov 24 17:27:03 crc kubenswrapper[4768]: I1124 17:27:03.807515 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9j5s5"] Nov 24 17:27:03 crc kubenswrapper[4768]: I1124 17:27:03.822719 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9j5s5"] Nov 24 17:27:03 crc kubenswrapper[4768]: I1124 17:27:03.824019 4768 scope.go:117] "RemoveContainer" containerID="8b208c186f3225dfb11d463c5c654b22a6547297c40b571d857daf6be78caf82" Nov 24 17:27:04 crc kubenswrapper[4768]: I1124 17:27:04.666873 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d7h4h" Nov 24 17:27:04 crc kubenswrapper[4768]: I1124 17:27:04.666935 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d7h4h" Nov 24 17:27:04 crc kubenswrapper[4768]: I1124 17:27:04.728309 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d7h4h" Nov 24 17:27:04 crc kubenswrapper[4768]: I1124 17:27:04.845941 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d7h4h" Nov 24 17:27:05 crc kubenswrapper[4768]: I1124 17:27:05.594937 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42bf9a47-95e8-4c34-aee4-d6c0bd62e406" path="/var/lib/kubelet/pods/42bf9a47-95e8-4c34-aee4-d6c0bd62e406/volumes" Nov 24 17:27:06 crc kubenswrapper[4768]: I1124 17:27:06.648755 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d7h4h"] Nov 24 17:27:06 crc kubenswrapper[4768]: I1124 17:27:06.807477 4768 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/certified-operators-d7h4h" podUID="a7c0d370-2e34-4eee-8109-8836ecfcdef9" containerName="registry-server" containerID="cri-o://5dae5c3f47c56a4702a7bcae8da9ab577e0cc60cefa93815ab7db0741cd94384" gracePeriod=2 Nov 24 17:27:07 crc kubenswrapper[4768]: I1124 17:27:07.779322 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d7h4h" Nov 24 17:27:07 crc kubenswrapper[4768]: I1124 17:27:07.820383 4768 generic.go:334] "Generic (PLEG): container finished" podID="a7c0d370-2e34-4eee-8109-8836ecfcdef9" containerID="5dae5c3f47c56a4702a7bcae8da9ab577e0cc60cefa93815ab7db0741cd94384" exitCode=0 Nov 24 17:27:07 crc kubenswrapper[4768]: I1124 17:27:07.820422 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7h4h" event={"ID":"a7c0d370-2e34-4eee-8109-8836ecfcdef9","Type":"ContainerDied","Data":"5dae5c3f47c56a4702a7bcae8da9ab577e0cc60cefa93815ab7db0741cd94384"} Nov 24 17:27:07 crc kubenswrapper[4768]: I1124 17:27:07.820455 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d7h4h" event={"ID":"a7c0d370-2e34-4eee-8109-8836ecfcdef9","Type":"ContainerDied","Data":"1b8ea8aaf5701f0fa2bcb98764b09c3680baf2a382d62e6c84470d5f8d1a3f2f"} Nov 24 17:27:07 crc kubenswrapper[4768]: I1124 17:27:07.820475 4768 scope.go:117] "RemoveContainer" containerID="5dae5c3f47c56a4702a7bcae8da9ab577e0cc60cefa93815ab7db0741cd94384" Nov 24 17:27:07 crc kubenswrapper[4768]: I1124 17:27:07.820497 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d7h4h" Nov 24 17:27:07 crc kubenswrapper[4768]: I1124 17:27:07.844827 4768 scope.go:117] "RemoveContainer" containerID="646d0631d6b86be56a3415f8f124b52b5f3c65343fb6a38e48f2120d29537f4c" Nov 24 17:27:07 crc kubenswrapper[4768]: I1124 17:27:07.865276 4768 scope.go:117] "RemoveContainer" containerID="c39b41c5f18667e6afef9f2e8b83a896008cb70e4ca711c587f9d55930dc426c" Nov 24 17:27:07 crc kubenswrapper[4768]: I1124 17:27:07.907247 4768 scope.go:117] "RemoveContainer" containerID="5dae5c3f47c56a4702a7bcae8da9ab577e0cc60cefa93815ab7db0741cd94384" Nov 24 17:27:07 crc kubenswrapper[4768]: E1124 17:27:07.907734 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dae5c3f47c56a4702a7bcae8da9ab577e0cc60cefa93815ab7db0741cd94384\": container with ID starting with 5dae5c3f47c56a4702a7bcae8da9ab577e0cc60cefa93815ab7db0741cd94384 not found: ID does not exist" containerID="5dae5c3f47c56a4702a7bcae8da9ab577e0cc60cefa93815ab7db0741cd94384" Nov 24 17:27:07 crc kubenswrapper[4768]: I1124 17:27:07.907772 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dae5c3f47c56a4702a7bcae8da9ab577e0cc60cefa93815ab7db0741cd94384"} err="failed to get container status \"5dae5c3f47c56a4702a7bcae8da9ab577e0cc60cefa93815ab7db0741cd94384\": rpc error: code = NotFound desc = could not find container \"5dae5c3f47c56a4702a7bcae8da9ab577e0cc60cefa93815ab7db0741cd94384\": container with ID starting with 5dae5c3f47c56a4702a7bcae8da9ab577e0cc60cefa93815ab7db0741cd94384 not found: ID does not exist" Nov 24 17:27:07 crc kubenswrapper[4768]: I1124 17:27:07.907794 4768 scope.go:117] "RemoveContainer" containerID="646d0631d6b86be56a3415f8f124b52b5f3c65343fb6a38e48f2120d29537f4c" Nov 24 17:27:07 crc kubenswrapper[4768]: 
E1124 17:27:07.908082 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"646d0631d6b86be56a3415f8f124b52b5f3c65343fb6a38e48f2120d29537f4c\": container with ID starting with 646d0631d6b86be56a3415f8f124b52b5f3c65343fb6a38e48f2120d29537f4c not found: ID does not exist" containerID="646d0631d6b86be56a3415f8f124b52b5f3c65343fb6a38e48f2120d29537f4c" Nov 24 17:27:07 crc kubenswrapper[4768]: I1124 17:27:07.908112 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"646d0631d6b86be56a3415f8f124b52b5f3c65343fb6a38e48f2120d29537f4c"} err="failed to get container status \"646d0631d6b86be56a3415f8f124b52b5f3c65343fb6a38e48f2120d29537f4c\": rpc error: code = NotFound desc = could not find container \"646d0631d6b86be56a3415f8f124b52b5f3c65343fb6a38e48f2120d29537f4c\": container with ID starting with 646d0631d6b86be56a3415f8f124b52b5f3c65343fb6a38e48f2120d29537f4c not found: ID does not exist" Nov 24 17:27:07 crc kubenswrapper[4768]: I1124 17:27:07.908130 4768 scope.go:117] "RemoveContainer" containerID="c39b41c5f18667e6afef9f2e8b83a896008cb70e4ca711c587f9d55930dc426c" Nov 24 17:27:07 crc kubenswrapper[4768]: E1124 17:27:07.908438 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c39b41c5f18667e6afef9f2e8b83a896008cb70e4ca711c587f9d55930dc426c\": container with ID starting with c39b41c5f18667e6afef9f2e8b83a896008cb70e4ca711c587f9d55930dc426c not found: ID does not exist" containerID="c39b41c5f18667e6afef9f2e8b83a896008cb70e4ca711c587f9d55930dc426c" Nov 24 17:27:07 crc kubenswrapper[4768]: I1124 17:27:07.908465 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c39b41c5f18667e6afef9f2e8b83a896008cb70e4ca711c587f9d55930dc426c"} err="failed to get container status \"c39b41c5f18667e6afef9f2e8b83a896008cb70e4ca711c587f9d55930dc426c\": rpc error: code = NotFound desc = could not find container \"c39b41c5f18667e6afef9f2e8b83a896008cb70e4ca711c587f9d55930dc426c\": container with ID starting with c39b41c5f18667e6afef9f2e8b83a896008cb70e4ca711c587f9d55930dc426c not found: ID does not exist" Nov 24 17:27:07 crc kubenswrapper[4768]: I1124 17:27:07.910222 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggrt5\" (UniqueName: \"kubernetes.io/projected/a7c0d370-2e34-4eee-8109-8836ecfcdef9-kube-api-access-ggrt5\") pod \"a7c0d370-2e34-4eee-8109-8836ecfcdef9\" (UID: \"a7c0d370-2e34-4eee-8109-8836ecfcdef9\") " Nov 24 17:27:07 crc kubenswrapper[4768]: I1124 17:27:07.910310 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c0d370-2e34-4eee-8109-8836ecfcdef9-utilities\") pod \"a7c0d370-2e34-4eee-8109-8836ecfcdef9\" (UID: \"a7c0d370-2e34-4eee-8109-8836ecfcdef9\") " Nov 24 17:27:07 crc kubenswrapper[4768]: I1124 17:27:07.910462 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c0d370-2e34-4eee-8109-8836ecfcdef9-catalog-content\") pod \"a7c0d370-2e34-4eee-8109-8836ecfcdef9\" (UID: \"a7c0d370-2e34-4eee-8109-8836ecfcdef9\") " Nov 24 17:27:07 crc kubenswrapper[4768]: I1124 17:27:07.911121 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7c0d370-2e34-4eee-8109-8836ecfcdef9-utilities" 
(OuterVolumeSpecName: "utilities") pod "a7c0d370-2e34-4eee-8109-8836ecfcdef9" (UID: "a7c0d370-2e34-4eee-8109-8836ecfcdef9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:27:07 crc kubenswrapper[4768]: I1124 17:27:07.918816 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c0d370-2e34-4eee-8109-8836ecfcdef9-kube-api-access-ggrt5" (OuterVolumeSpecName: "kube-api-access-ggrt5") pod "a7c0d370-2e34-4eee-8109-8836ecfcdef9" (UID: "a7c0d370-2e34-4eee-8109-8836ecfcdef9"). InnerVolumeSpecName "kube-api-access-ggrt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:27:07 crc kubenswrapper[4768]: I1124 17:27:07.957255 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7c0d370-2e34-4eee-8109-8836ecfcdef9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7c0d370-2e34-4eee-8109-8836ecfcdef9" (UID: "a7c0d370-2e34-4eee-8109-8836ecfcdef9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:27:08 crc kubenswrapper[4768]: I1124 17:27:08.013027 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7c0d370-2e34-4eee-8109-8836ecfcdef9-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:27:08 crc kubenswrapper[4768]: I1124 17:27:08.013059 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7c0d370-2e34-4eee-8109-8836ecfcdef9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:27:08 crc kubenswrapper[4768]: I1124 17:27:08.013073 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggrt5\" (UniqueName: \"kubernetes.io/projected/a7c0d370-2e34-4eee-8109-8836ecfcdef9-kube-api-access-ggrt5\") on node \"crc\" DevicePath \"\"" Nov 24 17:27:08 crc kubenswrapper[4768]: I1124 17:27:08.155992 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d7h4h"] Nov 24 17:27:08 crc kubenswrapper[4768]: I1124 17:27:08.163287 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d7h4h"] Nov 24 17:27:09 crc kubenswrapper[4768]: I1124 17:27:09.593748 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7c0d370-2e34-4eee-8109-8836ecfcdef9" path="/var/lib/kubelet/pods/a7c0d370-2e34-4eee-8109-8836ecfcdef9/volumes" Nov 24 17:27:28 crc kubenswrapper[4768]: I1124 17:27:28.445125 4768 scope.go:117] "RemoveContainer" containerID="9e2a2d4dff517cf127338fd9171de8d0316d7ca5bf8c3d92a367eacfd2c08438" Nov 24 17:27:34 crc kubenswrapper[4768]: I1124 17:27:34.892789 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:27:34 crc kubenswrapper[4768]: I1124 17:27:34.893415 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.068771 4768 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/community-operators-kmjd7"] Nov 24 17:28:04 crc kubenswrapper[4768]: E1124 17:28:04.069691 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42bf9a47-95e8-4c34-aee4-d6c0bd62e406" containerName="extract-content" Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.069703 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="42bf9a47-95e8-4c34-aee4-d6c0bd62e406" containerName="extract-content" Nov 24 17:28:04 crc kubenswrapper[4768]: E1124 17:28:04.069721 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42bf9a47-95e8-4c34-aee4-d6c0bd62e406" containerName="extract-utilities" Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.069728 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="42bf9a47-95e8-4c34-aee4-d6c0bd62e406" containerName="extract-utilities" Nov 24 17:28:04 crc kubenswrapper[4768]: E1124 17:28:04.069742 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c0d370-2e34-4eee-8109-8836ecfcdef9" containerName="registry-server" Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.069750 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c0d370-2e34-4eee-8109-8836ecfcdef9" containerName="registry-server" Nov 24 17:28:04 crc kubenswrapper[4768]: E1124 17:28:04.069770 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c0d370-2e34-4eee-8109-8836ecfcdef9" containerName="extract-content" Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.069778 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c0d370-2e34-4eee-8109-8836ecfcdef9" containerName="extract-content" Nov 24 17:28:04 crc kubenswrapper[4768]: E1124 17:28:04.069794 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42bf9a47-95e8-4c34-aee4-d6c0bd62e406" containerName="registry-server" Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.069801 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="42bf9a47-95e8-4c34-aee4-d6c0bd62e406" containerName="registry-server" Nov 24 17:28:04 crc kubenswrapper[4768]: E1124 17:28:04.069814 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c0d370-2e34-4eee-8109-8836ecfcdef9" containerName="extract-utilities" Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.069819 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c0d370-2e34-4eee-8109-8836ecfcdef9" containerName="extract-utilities" Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.069989 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="42bf9a47-95e8-4c34-aee4-d6c0bd62e406" containerName="registry-server" Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.070010 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c0d370-2e34-4eee-8109-8836ecfcdef9" containerName="registry-server" Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.071318 4768 util.go:30] "No sandbox for pod can be found. 
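Despite the E-level severity, the cpu_manager/memory_manager lines above are housekeeping, not failures: admission of the new community-operators-kmjd7 pod triggers RemoveStaleState, which drops CPU-set and memory assignments still recorded for containers of the two catalog pods deleted minutes earlier. A rough sketch of the idea follows; the data shapes are assumptions (the real managers key state per pod and per container).

```go
// Sketch (assumed types) of the "remove stale state" idea: drop
// resource assignments for pods the kubelet no longer tracks.
package main

import "fmt"

func removeStaleState(assignments map[string]string, livePods map[string]bool) {
	for podUID, container := range assignments {
		if !livePods[podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				podUID, container)
			delete(assignments, podUID) // deleting while ranging is safe in Go
		}
	}
}

func main() {
	a := map[string]string{
		"42bf9a47-95e8-4c34-aee4-d6c0bd62e406": "registry-server",
		"a7c0d370-2e34-4eee-8109-8836ecfcdef9": "registry-server",
	}
	// Only pods still known to the kubelet survive the sweep.
	removeStaleState(a, map[string]bool{"0f2c3a6c-754c-41f9-833c-7ce21db1a9ae": true})
}
```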
Need to start a new one" pod="openshift-marketplace/community-operators-kmjd7" Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.077735 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kmjd7"] Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.241762 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f2c3a6c-754c-41f9-833c-7ce21db1a9ae-utilities\") pod \"community-operators-kmjd7\" (UID: \"0f2c3a6c-754c-41f9-833c-7ce21db1a9ae\") " pod="openshift-marketplace/community-operators-kmjd7" Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.241829 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f2c3a6c-754c-41f9-833c-7ce21db1a9ae-catalog-content\") pod \"community-operators-kmjd7\" (UID: \"0f2c3a6c-754c-41f9-833c-7ce21db1a9ae\") " pod="openshift-marketplace/community-operators-kmjd7" Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.241864 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldqsg\" (UniqueName: \"kubernetes.io/projected/0f2c3a6c-754c-41f9-833c-7ce21db1a9ae-kube-api-access-ldqsg\") pod \"community-operators-kmjd7\" (UID: \"0f2c3a6c-754c-41f9-833c-7ce21db1a9ae\") " pod="openshift-marketplace/community-operators-kmjd7" Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.343500 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f2c3a6c-754c-41f9-833c-7ce21db1a9ae-catalog-content\") pod \"community-operators-kmjd7\" (UID: \"0f2c3a6c-754c-41f9-833c-7ce21db1a9ae\") " pod="openshift-marketplace/community-operators-kmjd7" Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.343567 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldqsg\" (UniqueName: \"kubernetes.io/projected/0f2c3a6c-754c-41f9-833c-7ce21db1a9ae-kube-api-access-ldqsg\") pod \"community-operators-kmjd7\" (UID: \"0f2c3a6c-754c-41f9-833c-7ce21db1a9ae\") " pod="openshift-marketplace/community-operators-kmjd7" Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.343770 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f2c3a6c-754c-41f9-833c-7ce21db1a9ae-utilities\") pod \"community-operators-kmjd7\" (UID: \"0f2c3a6c-754c-41f9-833c-7ce21db1a9ae\") " pod="openshift-marketplace/community-operators-kmjd7" Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.344065 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f2c3a6c-754c-41f9-833c-7ce21db1a9ae-catalog-content\") pod \"community-operators-kmjd7\" (UID: \"0f2c3a6c-754c-41f9-833c-7ce21db1a9ae\") " pod="openshift-marketplace/community-operators-kmjd7" Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.344170 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f2c3a6c-754c-41f9-833c-7ce21db1a9ae-utilities\") pod \"community-operators-kmjd7\" (UID: \"0f2c3a6c-754c-41f9-833c-7ce21db1a9ae\") " pod="openshift-marketplace/community-operators-kmjd7" Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.362762 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ldqsg\" (UniqueName: \"kubernetes.io/projected/0f2c3a6c-754c-41f9-833c-7ce21db1a9ae-kube-api-access-ldqsg\") pod \"community-operators-kmjd7\" (UID: \"0f2c3a6c-754c-41f9-833c-7ce21db1a9ae\") " pod="openshift-marketplace/community-operators-kmjd7" Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.388880 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kmjd7" Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.892636 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.892983 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:28:04 crc kubenswrapper[4768]: I1124 17:28:04.924088 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kmjd7"] Nov 24 17:28:05 crc kubenswrapper[4768]: I1124 17:28:05.377898 4768 generic.go:334] "Generic (PLEG): container finished" podID="0f2c3a6c-754c-41f9-833c-7ce21db1a9ae" containerID="9d776314b03ef7a8aa46eb3c9e8a0f3f71f5cc2852d5884d2e1f164f95bce5b6" exitCode=0 Nov 24 17:28:05 crc kubenswrapper[4768]: I1124 17:28:05.377936 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kmjd7" event={"ID":"0f2c3a6c-754c-41f9-833c-7ce21db1a9ae","Type":"ContainerDied","Data":"9d776314b03ef7a8aa46eb3c9e8a0f3f71f5cc2852d5884d2e1f164f95bce5b6"} Nov 24 17:28:05 crc kubenswrapper[4768]: I1124 17:28:05.377962 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kmjd7" event={"ID":"0f2c3a6c-754c-41f9-833c-7ce21db1a9ae","Type":"ContainerStarted","Data":"5e7643818c9684786fa2fb7d622edc28197906e5626bc1fae5f6f832c0e9f33f"} Nov 24 17:28:06 crc kubenswrapper[4768]: I1124 17:28:06.393463 4768 generic.go:334] "Generic (PLEG): container finished" podID="0f2c3a6c-754c-41f9-833c-7ce21db1a9ae" containerID="9f7052ae6cefd1763a0ee5b0e1a211ebdf9d37e67e615ae09bd214587b381ea2" exitCode=0 Nov 24 17:28:06 crc kubenswrapper[4768]: I1124 17:28:06.394041 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kmjd7" event={"ID":"0f2c3a6c-754c-41f9-833c-7ce21db1a9ae","Type":"ContainerDied","Data":"9f7052ae6cefd1763a0ee5b0e1a211ebdf9d37e67e615ae09bd214587b381ea2"} Nov 24 17:28:06 crc kubenswrapper[4768]: E1124 17:28:06.552062 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f2c3a6c_754c_41f9_833c_7ce21db1a9ae.slice/crio-conmon-9f7052ae6cefd1763a0ee5b0e1a211ebdf9d37e67e615ae09bd214587b381ea2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f2c3a6c_754c_41f9_833c_7ce21db1a9ae.slice/crio-9f7052ae6cefd1763a0ee5b0e1a211ebdf9d37e67e615ae09bd214587b381ea2.scope\": RecentStats: unable to find data in memory cache]" 
Nov 24 17:28:07 crc kubenswrapper[4768]: I1124 17:28:07.403443 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kmjd7" event={"ID":"0f2c3a6c-754c-41f9-833c-7ce21db1a9ae","Type":"ContainerStarted","Data":"dc774a8cb876270df9cf8e5a6b0343f9aff0965ac5707c5da030f473e47d9e71"} Nov 24 17:28:07 crc kubenswrapper[4768]: I1124 17:28:07.427403 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kmjd7" podStartSLOduration=2.007129325 podStartE2EDuration="3.427383741s" podCreationTimestamp="2025-11-24 17:28:04 +0000 UTC" firstStartedPulling="2025-11-24 17:28:05.379999216 +0000 UTC m=+2166.626967874" lastFinishedPulling="2025-11-24 17:28:06.800253612 +0000 UTC m=+2168.047222290" observedRunningTime="2025-11-24 17:28:07.422127892 +0000 UTC m=+2168.669096550" watchObservedRunningTime="2025-11-24 17:28:07.427383741 +0000 UTC m=+2168.674352399" Nov 24 17:28:14 crc kubenswrapper[4768]: I1124 17:28:14.389439 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kmjd7" Nov 24 17:28:14 crc kubenswrapper[4768]: I1124 17:28:14.390007 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kmjd7" Nov 24 17:28:14 crc kubenswrapper[4768]: I1124 17:28:14.446414 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kmjd7" Nov 24 17:28:14 crc kubenswrapper[4768]: I1124 17:28:14.523613 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kmjd7" Nov 24 17:28:14 crc kubenswrapper[4768]: I1124 17:28:14.679573 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kmjd7"] Nov 24 17:28:16 crc kubenswrapper[4768]: I1124 17:28:16.490515 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kmjd7" podUID="0f2c3a6c-754c-41f9-833c-7ce21db1a9ae" containerName="registry-server" containerID="cri-o://dc774a8cb876270df9cf8e5a6b0343f9aff0965ac5707c5da030f473e47d9e71" gracePeriod=2 Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.431601 4768 util.go:48] "No ready sandbox for pod can be found. 
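Two details in this window are worth annotating. First, the cadvisor "Partial failure issuing cadvisor.ContainerInfoV2 ... RecentStats: unable to find data in memory cache" error just above is a transient race: stats were requested for the crio-9f7052ae... scopes right as the short-lived extract-content container exited, before cadvisor had cached any samples for it. Second, the startup-latency entry is internally consistent: podStartSLOduration excludes image-pull time, as the small reproduction below shows using the monotonic m= clocks from the log itself.

```go
// Reproduces the podStartSLOduration arithmetic from the entry above:
// SLO duration = E2E duration minus the image-pull window (m= clocks).
package main

import "fmt"

func main() {
	e2e := 3.427383741                      // podStartE2EDuration, seconds
	pull := 2168.047222290 - 2166.626967874 // lastFinishedPulling - firstStartedPulling
	fmt.Printf("podStartSLOduration=%.9f\n", e2e-pull) // 2.007129325
}
```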
Need to start a new one" pod="openshift-marketplace/community-operators-kmjd7" Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.482257 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f2c3a6c-754c-41f9-833c-7ce21db1a9ae-catalog-content\") pod \"0f2c3a6c-754c-41f9-833c-7ce21db1a9ae\" (UID: \"0f2c3a6c-754c-41f9-833c-7ce21db1a9ae\") " Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.482388 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f2c3a6c-754c-41f9-833c-7ce21db1a9ae-utilities\") pod \"0f2c3a6c-754c-41f9-833c-7ce21db1a9ae\" (UID: \"0f2c3a6c-754c-41f9-833c-7ce21db1a9ae\") " Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.482440 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldqsg\" (UniqueName: \"kubernetes.io/projected/0f2c3a6c-754c-41f9-833c-7ce21db1a9ae-kube-api-access-ldqsg\") pod \"0f2c3a6c-754c-41f9-833c-7ce21db1a9ae\" (UID: \"0f2c3a6c-754c-41f9-833c-7ce21db1a9ae\") " Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.483473 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f2c3a6c-754c-41f9-833c-7ce21db1a9ae-utilities" (OuterVolumeSpecName: "utilities") pod "0f2c3a6c-754c-41f9-833c-7ce21db1a9ae" (UID: "0f2c3a6c-754c-41f9-833c-7ce21db1a9ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.489566 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f2c3a6c-754c-41f9-833c-7ce21db1a9ae-kube-api-access-ldqsg" (OuterVolumeSpecName: "kube-api-access-ldqsg") pod "0f2c3a6c-754c-41f9-833c-7ce21db1a9ae" (UID: "0f2c3a6c-754c-41f9-833c-7ce21db1a9ae"). InnerVolumeSpecName "kube-api-access-ldqsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.501484 4768 generic.go:334] "Generic (PLEG): container finished" podID="0f2c3a6c-754c-41f9-833c-7ce21db1a9ae" containerID="dc774a8cb876270df9cf8e5a6b0343f9aff0965ac5707c5da030f473e47d9e71" exitCode=0 Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.501550 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kmjd7" event={"ID":"0f2c3a6c-754c-41f9-833c-7ce21db1a9ae","Type":"ContainerDied","Data":"dc774a8cb876270df9cf8e5a6b0343f9aff0965ac5707c5da030f473e47d9e71"} Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.501591 4768 util.go:48] "No ready sandbox for pod can be found. 
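The unmount sequence above follows the kubelet's usual order: the reconciler starts UnmountVolume for each of the pod's volumes, operation_generator reports TearDown succeeded, "Volume detached" is recorded per volume, and a later pass removes the orphaned /var/lib/kubelet/pods/<uid>/volumes directory. The "ContainerStatus from runtime service failed ... NotFound" errors that follow shortly are likewise benign: RemoveContainer is retried against IDs CRI-O has already deleted, and NotFound simply confirms the delete is complete. Below is an illustrative dry-run sketch of the orphan-directory sweep; only the path layout comes from the log, everything else is an assumption.

```go
// Illustrative dry-run of the orphaned pod-volumes sweep behind
// "Cleaned up orphaned pod volumes dir". Path layout from the log;
// the liveness check and the rest are assumptions, not kubelet code.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func sweepOrphans(podsRoot string, livePods map[string]bool) error {
	entries, err := os.ReadDir(podsRoot)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if !e.IsDir() || livePods[e.Name()] {
			continue // still a known pod; leave its volumes alone
		}
		// The kubelet would remove this directory once all volumes are
		// confirmed unmounted; here we only report the candidate.
		fmt.Printf("orphaned pod volumes dir: %s\n",
			filepath.Join(podsRoot, e.Name(), "volumes"))
	}
	return nil
}

func main() {
	if err := sweepOrphans("/var/lib/kubelet/pods", map[string]bool{}); err != nil {
		fmt.Println("sweep failed:", err)
	}
}
```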
Need to start a new one" pod="openshift-marketplace/community-operators-kmjd7" Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.501626 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kmjd7" event={"ID":"0f2c3a6c-754c-41f9-833c-7ce21db1a9ae","Type":"ContainerDied","Data":"5e7643818c9684786fa2fb7d622edc28197906e5626bc1fae5f6f832c0e9f33f"} Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.501651 4768 scope.go:117] "RemoveContainer" containerID="dc774a8cb876270df9cf8e5a6b0343f9aff0965ac5707c5da030f473e47d9e71" Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.539986 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f2c3a6c-754c-41f9-833c-7ce21db1a9ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f2c3a6c-754c-41f9-833c-7ce21db1a9ae" (UID: "0f2c3a6c-754c-41f9-833c-7ce21db1a9ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.549806 4768 scope.go:117] "RemoveContainer" containerID="9f7052ae6cefd1763a0ee5b0e1a211ebdf9d37e67e615ae09bd214587b381ea2" Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.580592 4768 scope.go:117] "RemoveContainer" containerID="9d776314b03ef7a8aa46eb3c9e8a0f3f71f5cc2852d5884d2e1f164f95bce5b6" Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.584336 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f2c3a6c-754c-41f9-833c-7ce21db1a9ae-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.584370 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f2c3a6c-754c-41f9-833c-7ce21db1a9ae-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.584400 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldqsg\" (UniqueName: \"kubernetes.io/projected/0f2c3a6c-754c-41f9-833c-7ce21db1a9ae-kube-api-access-ldqsg\") on node \"crc\" DevicePath \"\"" Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.614380 4768 scope.go:117] "RemoveContainer" containerID="dc774a8cb876270df9cf8e5a6b0343f9aff0965ac5707c5da030f473e47d9e71" Nov 24 17:28:17 crc kubenswrapper[4768]: E1124 17:28:17.614899 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc774a8cb876270df9cf8e5a6b0343f9aff0965ac5707c5da030f473e47d9e71\": container with ID starting with dc774a8cb876270df9cf8e5a6b0343f9aff0965ac5707c5da030f473e47d9e71 not found: ID does not exist" containerID="dc774a8cb876270df9cf8e5a6b0343f9aff0965ac5707c5da030f473e47d9e71" Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.614965 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc774a8cb876270df9cf8e5a6b0343f9aff0965ac5707c5da030f473e47d9e71"} err="failed to get container status \"dc774a8cb876270df9cf8e5a6b0343f9aff0965ac5707c5da030f473e47d9e71\": rpc error: code = NotFound desc = could not find container \"dc774a8cb876270df9cf8e5a6b0343f9aff0965ac5707c5da030f473e47d9e71\": container with ID starting with dc774a8cb876270df9cf8e5a6b0343f9aff0965ac5707c5da030f473e47d9e71 not found: ID does not exist" Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.615000 4768 scope.go:117] "RemoveContainer" 
containerID="9f7052ae6cefd1763a0ee5b0e1a211ebdf9d37e67e615ae09bd214587b381ea2" Nov 24 17:28:17 crc kubenswrapper[4768]: E1124 17:28:17.635136 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f7052ae6cefd1763a0ee5b0e1a211ebdf9d37e67e615ae09bd214587b381ea2\": container with ID starting with 9f7052ae6cefd1763a0ee5b0e1a211ebdf9d37e67e615ae09bd214587b381ea2 not found: ID does not exist" containerID="9f7052ae6cefd1763a0ee5b0e1a211ebdf9d37e67e615ae09bd214587b381ea2" Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.635182 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f7052ae6cefd1763a0ee5b0e1a211ebdf9d37e67e615ae09bd214587b381ea2"} err="failed to get container status \"9f7052ae6cefd1763a0ee5b0e1a211ebdf9d37e67e615ae09bd214587b381ea2\": rpc error: code = NotFound desc = could not find container \"9f7052ae6cefd1763a0ee5b0e1a211ebdf9d37e67e615ae09bd214587b381ea2\": container with ID starting with 9f7052ae6cefd1763a0ee5b0e1a211ebdf9d37e67e615ae09bd214587b381ea2 not found: ID does not exist" Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.635213 4768 scope.go:117] "RemoveContainer" containerID="9d776314b03ef7a8aa46eb3c9e8a0f3f71f5cc2852d5884d2e1f164f95bce5b6" Nov 24 17:28:17 crc kubenswrapper[4768]: E1124 17:28:17.635590 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d776314b03ef7a8aa46eb3c9e8a0f3f71f5cc2852d5884d2e1f164f95bce5b6\": container with ID starting with 9d776314b03ef7a8aa46eb3c9e8a0f3f71f5cc2852d5884d2e1f164f95bce5b6 not found: ID does not exist" containerID="9d776314b03ef7a8aa46eb3c9e8a0f3f71f5cc2852d5884d2e1f164f95bce5b6" Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.635638 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d776314b03ef7a8aa46eb3c9e8a0f3f71f5cc2852d5884d2e1f164f95bce5b6"} err="failed to get container status \"9d776314b03ef7a8aa46eb3c9e8a0f3f71f5cc2852d5884d2e1f164f95bce5b6\": rpc error: code = NotFound desc = could not find container \"9d776314b03ef7a8aa46eb3c9e8a0f3f71f5cc2852d5884d2e1f164f95bce5b6\": container with ID starting with 9d776314b03ef7a8aa46eb3c9e8a0f3f71f5cc2852d5884d2e1f164f95bce5b6 not found: ID does not exist" Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.827424 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kmjd7"] Nov 24 17:28:17 crc kubenswrapper[4768]: I1124 17:28:17.835264 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kmjd7"] Nov 24 17:28:19 crc kubenswrapper[4768]: I1124 17:28:19.591676 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f2c3a6c-754c-41f9-833c-7ce21db1a9ae" path="/var/lib/kubelet/pods/0f2c3a6c-754c-41f9-833c-7ce21db1a9ae/volumes" Nov 24 17:28:19 crc kubenswrapper[4768]: I1124 17:28:19.851960 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kmfkt/must-gather-k8kt2"] Nov 24 17:28:19 crc kubenswrapper[4768]: E1124 17:28:19.852856 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f2c3a6c-754c-41f9-833c-7ce21db1a9ae" containerName="extract-utilities" Nov 24 17:28:19 crc kubenswrapper[4768]: I1124 17:28:19.852883 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f2c3a6c-754c-41f9-833c-7ce21db1a9ae" containerName="extract-utilities" Nov 24 17:28:19 
crc kubenswrapper[4768]: E1124 17:28:19.852910 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f2c3a6c-754c-41f9-833c-7ce21db1a9ae" containerName="extract-content" Nov 24 17:28:19 crc kubenswrapper[4768]: I1124 17:28:19.852919 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f2c3a6c-754c-41f9-833c-7ce21db1a9ae" containerName="extract-content" Nov 24 17:28:19 crc kubenswrapper[4768]: E1124 17:28:19.852950 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f2c3a6c-754c-41f9-833c-7ce21db1a9ae" containerName="registry-server" Nov 24 17:28:19 crc kubenswrapper[4768]: I1124 17:28:19.852958 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f2c3a6c-754c-41f9-833c-7ce21db1a9ae" containerName="registry-server" Nov 24 17:28:19 crc kubenswrapper[4768]: I1124 17:28:19.853229 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f2c3a6c-754c-41f9-833c-7ce21db1a9ae" containerName="registry-server" Nov 24 17:28:19 crc kubenswrapper[4768]: I1124 17:28:19.854659 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kmfkt/must-gather-k8kt2" Nov 24 17:28:19 crc kubenswrapper[4768]: I1124 17:28:19.856881 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-kmfkt"/"default-dockercfg-4ptpt" Nov 24 17:28:19 crc kubenswrapper[4768]: I1124 17:28:19.856908 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kmfkt"/"kube-root-ca.crt" Nov 24 17:28:19 crc kubenswrapper[4768]: I1124 17:28:19.857206 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kmfkt"/"openshift-service-ca.crt" Nov 24 17:28:19 crc kubenswrapper[4768]: I1124 17:28:19.863393 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kmfkt/must-gather-k8kt2"] Nov 24 17:28:20 crc kubenswrapper[4768]: I1124 17:28:20.026625 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fb445038-f451-4347-8f74-15048f7cfb4b-must-gather-output\") pod \"must-gather-k8kt2\" (UID: \"fb445038-f451-4347-8f74-15048f7cfb4b\") " pod="openshift-must-gather-kmfkt/must-gather-k8kt2" Nov 24 17:28:20 crc kubenswrapper[4768]: I1124 17:28:20.026804 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dt27\" (UniqueName: \"kubernetes.io/projected/fb445038-f451-4347-8f74-15048f7cfb4b-kube-api-access-2dt27\") pod \"must-gather-k8kt2\" (UID: \"fb445038-f451-4347-8f74-15048f7cfb4b\") " pod="openshift-must-gather-kmfkt/must-gather-k8kt2" Nov 24 17:28:20 crc kubenswrapper[4768]: I1124 17:28:20.128685 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fb445038-f451-4347-8f74-15048f7cfb4b-must-gather-output\") pod \"must-gather-k8kt2\" (UID: \"fb445038-f451-4347-8f74-15048f7cfb4b\") " pod="openshift-must-gather-kmfkt/must-gather-k8kt2" Nov 24 17:28:20 crc kubenswrapper[4768]: I1124 17:28:20.128775 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dt27\" (UniqueName: \"kubernetes.io/projected/fb445038-f451-4347-8f74-15048f7cfb4b-kube-api-access-2dt27\") pod \"must-gather-k8kt2\" (UID: \"fb445038-f451-4347-8f74-15048f7cfb4b\") " pod="openshift-must-gather-kmfkt/must-gather-k8kt2" Nov 24 17:28:20 crc 
kubenswrapper[4768]: I1124 17:28:20.129208 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fb445038-f451-4347-8f74-15048f7cfb4b-must-gather-output\") pod \"must-gather-k8kt2\" (UID: \"fb445038-f451-4347-8f74-15048f7cfb4b\") " pod="openshift-must-gather-kmfkt/must-gather-k8kt2" Nov 24 17:28:20 crc kubenswrapper[4768]: I1124 17:28:20.146143 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dt27\" (UniqueName: \"kubernetes.io/projected/fb445038-f451-4347-8f74-15048f7cfb4b-kube-api-access-2dt27\") pod \"must-gather-k8kt2\" (UID: \"fb445038-f451-4347-8f74-15048f7cfb4b\") " pod="openshift-must-gather-kmfkt/must-gather-k8kt2" Nov 24 17:28:20 crc kubenswrapper[4768]: I1124 17:28:20.179629 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kmfkt/must-gather-k8kt2" Nov 24 17:28:20 crc kubenswrapper[4768]: I1124 17:28:20.696964 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kmfkt/must-gather-k8kt2"] Nov 24 17:28:21 crc kubenswrapper[4768]: I1124 17:28:21.537109 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kmfkt/must-gather-k8kt2" event={"ID":"fb445038-f451-4347-8f74-15048f7cfb4b","Type":"ContainerStarted","Data":"31e79c3c683161a72d9b357d8bf9552fc4d0c179c2bfd24e0fbd510a2324b314"} Nov 24 17:28:21 crc kubenswrapper[4768]: I1124 17:28:21.537394 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kmfkt/must-gather-k8kt2" event={"ID":"fb445038-f451-4347-8f74-15048f7cfb4b","Type":"ContainerStarted","Data":"2a198039fc014cafe2a3e9569511f04b5505d1ff5c406db9a6f6edc9d790f379"} Nov 24 17:28:21 crc kubenswrapper[4768]: I1124 17:28:21.537408 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kmfkt/must-gather-k8kt2" event={"ID":"fb445038-f451-4347-8f74-15048f7cfb4b","Type":"ContainerStarted","Data":"9523049e82291edd3c31330431b78d0e1326d89d874b47c1f6a0665701bd2050"} Nov 24 17:28:21 crc kubenswrapper[4768]: I1124 17:28:21.558430 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kmfkt/must-gather-k8kt2" podStartSLOduration=2.5584110730000003 podStartE2EDuration="2.558411073s" podCreationTimestamp="2025-11-24 17:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:28:21.551980261 +0000 UTC m=+2182.798948919" watchObservedRunningTime="2025-11-24 17:28:21.558411073 +0000 UTC m=+2182.805379731" Nov 24 17:28:24 crc kubenswrapper[4768]: I1124 17:28:24.711803 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kmfkt/crc-debug-46hhq"] Nov 24 17:28:24 crc kubenswrapper[4768]: I1124 17:28:24.714259 4768 util.go:30] "No sandbox for pod can be found. 
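must-gather-k8kt2 comes up with three ContainerStarted events (the pod sandbox plus the pod's containers) and a podStartSLOduration equal to its E2E duration, because no image pull was needed: firstStartedPulling and lastFinishedPulling read "0001-01-01 00:00:00 +0000 UTC", which is simply Go's zero time.Time. A two-line demonstration:

```go
// Why the log shows firstStartedPulling="0001-01-01 00:00:00 +0000 UTC":
// that is Go's zero time.Time, i.e. "no pull was ever started".
package main

import (
	"fmt"
	"time"
)

func main() {
	var firstStartedPulling time.Time      // zero value
	fmt.Println(firstStartedPulling)       // 0001-01-01 00:00:00 +0000 UTC
	fmt.Println(firstStartedPulling.IsZero()) // true
}
```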
Need to start a new one" pod="openshift-must-gather-kmfkt/crc-debug-46hhq" Nov 24 17:28:24 crc kubenswrapper[4768]: I1124 17:28:24.831314 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lpwc\" (UniqueName: \"kubernetes.io/projected/9a77235c-99f3-4aed-b02e-137537cd9424-kube-api-access-5lpwc\") pod \"crc-debug-46hhq\" (UID: \"9a77235c-99f3-4aed-b02e-137537cd9424\") " pod="openshift-must-gather-kmfkt/crc-debug-46hhq" Nov 24 17:28:24 crc kubenswrapper[4768]: I1124 17:28:24.831393 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a77235c-99f3-4aed-b02e-137537cd9424-host\") pod \"crc-debug-46hhq\" (UID: \"9a77235c-99f3-4aed-b02e-137537cd9424\") " pod="openshift-must-gather-kmfkt/crc-debug-46hhq" Nov 24 17:28:24 crc kubenswrapper[4768]: I1124 17:28:24.932664 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lpwc\" (UniqueName: \"kubernetes.io/projected/9a77235c-99f3-4aed-b02e-137537cd9424-kube-api-access-5lpwc\") pod \"crc-debug-46hhq\" (UID: \"9a77235c-99f3-4aed-b02e-137537cd9424\") " pod="openshift-must-gather-kmfkt/crc-debug-46hhq" Nov 24 17:28:24 crc kubenswrapper[4768]: I1124 17:28:24.932732 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a77235c-99f3-4aed-b02e-137537cd9424-host\") pod \"crc-debug-46hhq\" (UID: \"9a77235c-99f3-4aed-b02e-137537cd9424\") " pod="openshift-must-gather-kmfkt/crc-debug-46hhq" Nov 24 17:28:24 crc kubenswrapper[4768]: I1124 17:28:24.932814 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a77235c-99f3-4aed-b02e-137537cd9424-host\") pod \"crc-debug-46hhq\" (UID: \"9a77235c-99f3-4aed-b02e-137537cd9424\") " pod="openshift-must-gather-kmfkt/crc-debug-46hhq" Nov 24 17:28:24 crc kubenswrapper[4768]: I1124 17:28:24.955433 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lpwc\" (UniqueName: \"kubernetes.io/projected/9a77235c-99f3-4aed-b02e-137537cd9424-kube-api-access-5lpwc\") pod \"crc-debug-46hhq\" (UID: \"9a77235c-99f3-4aed-b02e-137537cd9424\") " pod="openshift-must-gather-kmfkt/crc-debug-46hhq" Nov 24 17:28:25 crc kubenswrapper[4768]: I1124 17:28:25.033242 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kmfkt/crc-debug-46hhq" Nov 24 17:28:25 crc kubenswrapper[4768]: W1124 17:28:25.064242 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a77235c_99f3_4aed_b02e_137537cd9424.slice/crio-f885658d682ff8ec5b9f79f110e7b322cabfc2077172b57cb30b4555187d35b7 WatchSource:0}: Error finding container f885658d682ff8ec5b9f79f110e7b322cabfc2077172b57cb30b4555187d35b7: Status 404 returned error can't find the container with id f885658d682ff8ec5b9f79f110e7b322cabfc2077172b57cb30b4555187d35b7 Nov 24 17:28:25 crc kubenswrapper[4768]: I1124 17:28:25.575723 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kmfkt/crc-debug-46hhq" event={"ID":"9a77235c-99f3-4aed-b02e-137537cd9424","Type":"ContainerStarted","Data":"a31b4d64b371cdba6790f9d4d2b6d8b9dca30df8d7892cc04ba30e697e825e91"} Nov 24 17:28:25 crc kubenswrapper[4768]: I1124 17:28:25.576183 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kmfkt/crc-debug-46hhq" event={"ID":"9a77235c-99f3-4aed-b02e-137537cd9424","Type":"ContainerStarted","Data":"f885658d682ff8ec5b9f79f110e7b322cabfc2077172b57cb30b4555187d35b7"} Nov 24 17:28:25 crc kubenswrapper[4768]: I1124 17:28:25.598736 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kmfkt/crc-debug-46hhq" podStartSLOduration=1.598718343 podStartE2EDuration="1.598718343s" podCreationTimestamp="2025-11-24 17:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 17:28:25.590709777 +0000 UTC m=+2186.837678435" watchObservedRunningTime="2025-11-24 17:28:25.598718343 +0000 UTC m=+2186.845687001" Nov 24 17:28:34 crc kubenswrapper[4768]: I1124 17:28:34.893444 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:28:34 crc kubenswrapper[4768]: I1124 17:28:34.893878 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:28:34 crc kubenswrapper[4768]: I1124 17:28:34.893920 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf255" Nov 24 17:28:34 crc kubenswrapper[4768]: I1124 17:28:34.894434 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d13370034de2225dc19449060f182bae1bf4a76aba56f95b931132dc577bda6"} pod="openshift-machine-config-operator/machine-config-daemon-jf255" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 17:28:34 crc kubenswrapper[4768]: I1124 17:28:34.894476 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" 
containerID="cri-o://5d13370034de2225dc19449060f182bae1bf4a76aba56f95b931132dc577bda6" gracePeriod=600 Nov 24 17:28:35 crc kubenswrapper[4768]: I1124 17:28:35.711099 4768 generic.go:334] "Generic (PLEG): container finished" podID="517d8128-bef5-40a3-a786-5010780c2a58" containerID="5d13370034de2225dc19449060f182bae1bf4a76aba56f95b931132dc577bda6" exitCode=0 Nov 24 17:28:35 crc kubenswrapper[4768]: I1124 17:28:35.711161 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" event={"ID":"517d8128-bef5-40a3-a786-5010780c2a58","Type":"ContainerDied","Data":"5d13370034de2225dc19449060f182bae1bf4a76aba56f95b931132dc577bda6"} Nov 24 17:28:35 crc kubenswrapper[4768]: I1124 17:28:35.711447 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" event={"ID":"517d8128-bef5-40a3-a786-5010780c2a58","Type":"ContainerStarted","Data":"24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe"} Nov 24 17:28:35 crc kubenswrapper[4768]: I1124 17:28:35.711464 4768 scope.go:117] "RemoveContainer" containerID="99fafeeec46e7b0911c1c05e132ff14ffe8d4a4847dc33673701437fb65fb79c" Nov 24 17:28:58 crc kubenswrapper[4768]: I1124 17:28:58.943726 4768 generic.go:334] "Generic (PLEG): container finished" podID="9a77235c-99f3-4aed-b02e-137537cd9424" containerID="a31b4d64b371cdba6790f9d4d2b6d8b9dca30df8d7892cc04ba30e697e825e91" exitCode=0 Nov 24 17:28:58 crc kubenswrapper[4768]: I1124 17:28:58.943789 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kmfkt/crc-debug-46hhq" event={"ID":"9a77235c-99f3-4aed-b02e-137537cd9424","Type":"ContainerDied","Data":"a31b4d64b371cdba6790f9d4d2b6d8b9dca30df8d7892cc04ba30e697e825e91"} Nov 24 17:29:00 crc kubenswrapper[4768]: I1124 17:29:00.054697 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kmfkt/crc-debug-46hhq" Nov 24 17:29:00 crc kubenswrapper[4768]: I1124 17:29:00.107321 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kmfkt/crc-debug-46hhq"] Nov 24 17:29:00 crc kubenswrapper[4768]: I1124 17:29:00.113678 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kmfkt/crc-debug-46hhq"] Nov 24 17:29:00 crc kubenswrapper[4768]: I1124 17:29:00.138487 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a77235c-99f3-4aed-b02e-137537cd9424-host\") pod \"9a77235c-99f3-4aed-b02e-137537cd9424\" (UID: \"9a77235c-99f3-4aed-b02e-137537cd9424\") " Nov 24 17:29:00 crc kubenswrapper[4768]: I1124 17:29:00.138556 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lpwc\" (UniqueName: \"kubernetes.io/projected/9a77235c-99f3-4aed-b02e-137537cd9424-kube-api-access-5lpwc\") pod \"9a77235c-99f3-4aed-b02e-137537cd9424\" (UID: \"9a77235c-99f3-4aed-b02e-137537cd9424\") " Nov 24 17:29:00 crc kubenswrapper[4768]: I1124 17:29:00.138622 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a77235c-99f3-4aed-b02e-137537cd9424-host" (OuterVolumeSpecName: "host") pod "9a77235c-99f3-4aed-b02e-137537cd9424" (UID: "9a77235c-99f3-4aed-b02e-137537cd9424"). InnerVolumeSpecName "host". 
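This is the resolution of the liveness failures logged at 17:27:34 and 17:28:04: after repeated connection refusals on http://127.0.0.1:8798/health the kubelet declares machine-config-daemon unhealthy, kills it with the pod's 600s grace period, and PLEG then reports the Died/Started pair of a clean restart (5d133700... exits with code 0 and 24537f41... replaces it; the RemoveContainer of 99fafeeec... prunes the prior dead instance). Below is a minimal sketch of an HTTP liveness check of this shape; the timeout and status handling are assumptions mirroring common probe semantics, not the kubelet's prober code.

```go
// Minimal sketch of an HTTP liveness check like the one failing above
// ("Get http://127.0.0.1:8798/health: connect: connection refused").
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probeOnce(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. connection refused -> probe failure
	}
	defer resp.Body.Close()
	// Statuses outside 200-399 are treated as unhealthy here
	// (an assumption mirroring typical probe semantics).
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeOnce("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```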
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:29:00 crc kubenswrapper[4768]: I1124 17:29:00.139100 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a77235c-99f3-4aed-b02e-137537cd9424-host\") on node \"crc\" DevicePath \"\"" Nov 24 17:29:00 crc kubenswrapper[4768]: I1124 17:29:00.143688 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a77235c-99f3-4aed-b02e-137537cd9424-kube-api-access-5lpwc" (OuterVolumeSpecName: "kube-api-access-5lpwc") pod "9a77235c-99f3-4aed-b02e-137537cd9424" (UID: "9a77235c-99f3-4aed-b02e-137537cd9424"). InnerVolumeSpecName "kube-api-access-5lpwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:29:00 crc kubenswrapper[4768]: I1124 17:29:00.241560 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lpwc\" (UniqueName: \"kubernetes.io/projected/9a77235c-99f3-4aed-b02e-137537cd9424-kube-api-access-5lpwc\") on node \"crc\" DevicePath \"\"" Nov 24 17:29:00 crc kubenswrapper[4768]: I1124 17:29:00.961384 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f885658d682ff8ec5b9f79f110e7b322cabfc2077172b57cb30b4555187d35b7" Nov 24 17:29:00 crc kubenswrapper[4768]: I1124 17:29:00.961430 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kmfkt/crc-debug-46hhq" Nov 24 17:29:01 crc kubenswrapper[4768]: I1124 17:29:01.343948 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kmfkt/crc-debug-t7bfd"] Nov 24 17:29:01 crc kubenswrapper[4768]: E1124 17:29:01.346073 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a77235c-99f3-4aed-b02e-137537cd9424" containerName="container-00" Nov 24 17:29:01 crc kubenswrapper[4768]: I1124 17:29:01.346197 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a77235c-99f3-4aed-b02e-137537cd9424" containerName="container-00" Nov 24 17:29:01 crc kubenswrapper[4768]: I1124 17:29:01.346880 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a77235c-99f3-4aed-b02e-137537cd9424" containerName="container-00" Nov 24 17:29:01 crc kubenswrapper[4768]: I1124 17:29:01.347592 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kmfkt/crc-debug-t7bfd" Nov 24 17:29:01 crc kubenswrapper[4768]: I1124 17:29:01.462860 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lswsf\" (UniqueName: \"kubernetes.io/projected/7a28fd33-47fe-407d-a56b-be4d25e58de5-kube-api-access-lswsf\") pod \"crc-debug-t7bfd\" (UID: \"7a28fd33-47fe-407d-a56b-be4d25e58de5\") " pod="openshift-must-gather-kmfkt/crc-debug-t7bfd" Nov 24 17:29:01 crc kubenswrapper[4768]: I1124 17:29:01.462930 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a28fd33-47fe-407d-a56b-be4d25e58de5-host\") pod \"crc-debug-t7bfd\" (UID: \"7a28fd33-47fe-407d-a56b-be4d25e58de5\") " pod="openshift-must-gather-kmfkt/crc-debug-t7bfd" Nov 24 17:29:01 crc kubenswrapper[4768]: I1124 17:29:01.564409 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lswsf\" (UniqueName: \"kubernetes.io/projected/7a28fd33-47fe-407d-a56b-be4d25e58de5-kube-api-access-lswsf\") pod \"crc-debug-t7bfd\" (UID: \"7a28fd33-47fe-407d-a56b-be4d25e58de5\") " pod="openshift-must-gather-kmfkt/crc-debug-t7bfd" Nov 24 17:29:01 crc kubenswrapper[4768]: I1124 17:29:01.564692 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a28fd33-47fe-407d-a56b-be4d25e58de5-host\") pod \"crc-debug-t7bfd\" (UID: \"7a28fd33-47fe-407d-a56b-be4d25e58de5\") " pod="openshift-must-gather-kmfkt/crc-debug-t7bfd" Nov 24 17:29:01 crc kubenswrapper[4768]: I1124 17:29:01.564801 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a28fd33-47fe-407d-a56b-be4d25e58de5-host\") pod \"crc-debug-t7bfd\" (UID: \"7a28fd33-47fe-407d-a56b-be4d25e58de5\") " pod="openshift-must-gather-kmfkt/crc-debug-t7bfd" Nov 24 17:29:01 crc kubenswrapper[4768]: I1124 17:29:01.583658 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lswsf\" (UniqueName: \"kubernetes.io/projected/7a28fd33-47fe-407d-a56b-be4d25e58de5-kube-api-access-lswsf\") pod \"crc-debug-t7bfd\" (UID: \"7a28fd33-47fe-407d-a56b-be4d25e58de5\") " pod="openshift-must-gather-kmfkt/crc-debug-t7bfd" Nov 24 17:29:01 crc kubenswrapper[4768]: I1124 17:29:01.593453 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a77235c-99f3-4aed-b02e-137537cd9424" path="/var/lib/kubelet/pods/9a77235c-99f3-4aed-b02e-137537cd9424/volumes" Nov 24 17:29:01 crc kubenswrapper[4768]: I1124 17:29:01.671140 4768 util.go:30] "No sandbox for pod can be found. 
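The crc-debug-* pods are the per-node helpers must-gather uses to run commands on the host: each mounts the node root through a hostPath volume named "host", runs a single container-00 to completion, and is deleted, which is why the ADD/DELETE/REMOVE cycle repeats (46hhq, then t7bfd, then n68ws). The W-level manager.go "Failed to process watch event ... 404" lines appear to be a cosmetic race: cadvisor sees the new crio-... cgroup before CRI-O has registered the container, so the lookup briefly returns 404. The container ID in that warning is recoverable from the cgroup path itself, as this small string-handling sketch (no cadvisor APIs) shows:

```go
// Pulls the container ID out of a cgroup path like the one in the
// watch-event warning above (string handling only; no cadvisor APIs).
package main

import (
	"fmt"
	"strings"
)

func containerIDFromCgroup(path string) string {
	base := path[strings.LastIndex(path, "/")+1:] // e.g. "crio-e7c2f8...ada55b"
	return strings.TrimPrefix(base, "crio-")
}

func main() {
	p := "/kubepods.slice/kubepods-besteffort.slice/" +
		"kubepods-besteffort-pod7a28fd33_47fe_407d_a56b_be4d25e58de5.slice/" +
		"crio-e7c2f864578e12d8646b08b8c14f476908a2eb98fbdd8441bbf2afbd83ada55b"
	fmt.Println(containerIDFromCgroup(p))
}
```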
Need to start a new one" pod="openshift-must-gather-kmfkt/crc-debug-t7bfd" Nov 24 17:29:01 crc kubenswrapper[4768]: W1124 17:29:01.698572 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a28fd33_47fe_407d_a56b_be4d25e58de5.slice/crio-e7c2f864578e12d8646b08b8c14f476908a2eb98fbdd8441bbf2afbd83ada55b WatchSource:0}: Error finding container e7c2f864578e12d8646b08b8c14f476908a2eb98fbdd8441bbf2afbd83ada55b: Status 404 returned error can't find the container with id e7c2f864578e12d8646b08b8c14f476908a2eb98fbdd8441bbf2afbd83ada55b Nov 24 17:29:01 crc kubenswrapper[4768]: I1124 17:29:01.972874 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kmfkt/crc-debug-t7bfd" event={"ID":"7a28fd33-47fe-407d-a56b-be4d25e58de5","Type":"ContainerStarted","Data":"df6c6cdf77ddacafe5c7a93332c840c82a4dcf2eee76ad3649a0764097e2d7ca"} Nov 24 17:29:01 crc kubenswrapper[4768]: I1124 17:29:01.973508 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kmfkt/crc-debug-t7bfd" event={"ID":"7a28fd33-47fe-407d-a56b-be4d25e58de5","Type":"ContainerStarted","Data":"e7c2f864578e12d8646b08b8c14f476908a2eb98fbdd8441bbf2afbd83ada55b"} Nov 24 17:29:02 crc kubenswrapper[4768]: I1124 17:29:02.352281 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kmfkt/crc-debug-t7bfd"] Nov 24 17:29:02 crc kubenswrapper[4768]: I1124 17:29:02.359305 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kmfkt/crc-debug-t7bfd"] Nov 24 17:29:02 crc kubenswrapper[4768]: I1124 17:29:02.984899 4768 generic.go:334] "Generic (PLEG): container finished" podID="7a28fd33-47fe-407d-a56b-be4d25e58de5" containerID="df6c6cdf77ddacafe5c7a93332c840c82a4dcf2eee76ad3649a0764097e2d7ca" exitCode=0 Nov 24 17:29:03 crc kubenswrapper[4768]: I1124 17:29:03.084773 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kmfkt/crc-debug-t7bfd" Nov 24 17:29:03 crc kubenswrapper[4768]: I1124 17:29:03.199198 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a28fd33-47fe-407d-a56b-be4d25e58de5-host\") pod \"7a28fd33-47fe-407d-a56b-be4d25e58de5\" (UID: \"7a28fd33-47fe-407d-a56b-be4d25e58de5\") " Nov 24 17:29:03 crc kubenswrapper[4768]: I1124 17:29:03.199856 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lswsf\" (UniqueName: \"kubernetes.io/projected/7a28fd33-47fe-407d-a56b-be4d25e58de5-kube-api-access-lswsf\") pod \"7a28fd33-47fe-407d-a56b-be4d25e58de5\" (UID: \"7a28fd33-47fe-407d-a56b-be4d25e58de5\") " Nov 24 17:29:03 crc kubenswrapper[4768]: I1124 17:29:03.199999 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a28fd33-47fe-407d-a56b-be4d25e58de5-host" (OuterVolumeSpecName: "host") pod "7a28fd33-47fe-407d-a56b-be4d25e58de5" (UID: "7a28fd33-47fe-407d-a56b-be4d25e58de5"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:29:03 crc kubenswrapper[4768]: I1124 17:29:03.201505 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a28fd33-47fe-407d-a56b-be4d25e58de5-host\") on node \"crc\" DevicePath \"\"" Nov 24 17:29:03 crc kubenswrapper[4768]: I1124 17:29:03.206560 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a28fd33-47fe-407d-a56b-be4d25e58de5-kube-api-access-lswsf" (OuterVolumeSpecName: "kube-api-access-lswsf") pod "7a28fd33-47fe-407d-a56b-be4d25e58de5" (UID: "7a28fd33-47fe-407d-a56b-be4d25e58de5"). InnerVolumeSpecName "kube-api-access-lswsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:29:03 crc kubenswrapper[4768]: I1124 17:29:03.303725 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lswsf\" (UniqueName: \"kubernetes.io/projected/7a28fd33-47fe-407d-a56b-be4d25e58de5-kube-api-access-lswsf\") on node \"crc\" DevicePath \"\"" Nov 24 17:29:03 crc kubenswrapper[4768]: I1124 17:29:03.516066 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kmfkt/crc-debug-n68ws"] Nov 24 17:29:03 crc kubenswrapper[4768]: E1124 17:29:03.516459 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a28fd33-47fe-407d-a56b-be4d25e58de5" containerName="container-00" Nov 24 17:29:03 crc kubenswrapper[4768]: I1124 17:29:03.516472 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a28fd33-47fe-407d-a56b-be4d25e58de5" containerName="container-00" Nov 24 17:29:03 crc kubenswrapper[4768]: I1124 17:29:03.516645 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a28fd33-47fe-407d-a56b-be4d25e58de5" containerName="container-00" Nov 24 17:29:03 crc kubenswrapper[4768]: I1124 17:29:03.517233 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kmfkt/crc-debug-n68ws" Nov 24 17:29:03 crc kubenswrapper[4768]: I1124 17:29:03.590523 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a28fd33-47fe-407d-a56b-be4d25e58de5" path="/var/lib/kubelet/pods/7a28fd33-47fe-407d-a56b-be4d25e58de5/volumes" Nov 24 17:29:03 crc kubenswrapper[4768]: I1124 17:29:03.608621 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d928b942-3a81-4607-8d34-600ba1825bbc-host\") pod \"crc-debug-n68ws\" (UID: \"d928b942-3a81-4607-8d34-600ba1825bbc\") " pod="openshift-must-gather-kmfkt/crc-debug-n68ws" Nov 24 17:29:03 crc kubenswrapper[4768]: I1124 17:29:03.608713 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp2ck\" (UniqueName: \"kubernetes.io/projected/d928b942-3a81-4607-8d34-600ba1825bbc-kube-api-access-kp2ck\") pod \"crc-debug-n68ws\" (UID: \"d928b942-3a81-4607-8d34-600ba1825bbc\") " pod="openshift-must-gather-kmfkt/crc-debug-n68ws" Nov 24 17:29:03 crc kubenswrapper[4768]: I1124 17:29:03.710380 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d928b942-3a81-4607-8d34-600ba1825bbc-host\") pod \"crc-debug-n68ws\" (UID: \"d928b942-3a81-4607-8d34-600ba1825bbc\") " pod="openshift-must-gather-kmfkt/crc-debug-n68ws" Nov 24 17:29:03 crc kubenswrapper[4768]: I1124 17:29:03.710488 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp2ck\" (UniqueName: \"kubernetes.io/projected/d928b942-3a81-4607-8d34-600ba1825bbc-kube-api-access-kp2ck\") pod \"crc-debug-n68ws\" (UID: \"d928b942-3a81-4607-8d34-600ba1825bbc\") " pod="openshift-must-gather-kmfkt/crc-debug-n68ws" Nov 24 17:29:03 crc kubenswrapper[4768]: I1124 17:29:03.711178 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d928b942-3a81-4607-8d34-600ba1825bbc-host\") pod \"crc-debug-n68ws\" (UID: \"d928b942-3a81-4607-8d34-600ba1825bbc\") " pod="openshift-must-gather-kmfkt/crc-debug-n68ws" Nov 24 17:29:03 crc kubenswrapper[4768]: I1124 17:29:03.725875 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp2ck\" (UniqueName: \"kubernetes.io/projected/d928b942-3a81-4607-8d34-600ba1825bbc-kube-api-access-kp2ck\") pod \"crc-debug-n68ws\" (UID: \"d928b942-3a81-4607-8d34-600ba1825bbc\") " pod="openshift-must-gather-kmfkt/crc-debug-n68ws" Nov 24 17:29:03 crc kubenswrapper[4768]: I1124 17:29:03.834674 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kmfkt/crc-debug-n68ws" Nov 24 17:29:03 crc kubenswrapper[4768]: W1124 17:29:03.869006 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd928b942_3a81_4607_8d34_600ba1825bbc.slice/crio-6dbd29d7688243fc800d8ec418cd711c5e013111a8b3e75116fd5369c2e2361e WatchSource:0}: Error finding container 6dbd29d7688243fc800d8ec418cd711c5e013111a8b3e75116fd5369c2e2361e: Status 404 returned error can't find the container with id 6dbd29d7688243fc800d8ec418cd711c5e013111a8b3e75116fd5369c2e2361e Nov 24 17:29:03 crc kubenswrapper[4768]: I1124 17:29:03.995164 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kmfkt/crc-debug-n68ws" event={"ID":"d928b942-3a81-4607-8d34-600ba1825bbc","Type":"ContainerStarted","Data":"6dbd29d7688243fc800d8ec418cd711c5e013111a8b3e75116fd5369c2e2361e"} Nov 24 17:29:03 crc kubenswrapper[4768]: I1124 17:29:03.997198 4768 scope.go:117] "RemoveContainer" containerID="df6c6cdf77ddacafe5c7a93332c840c82a4dcf2eee76ad3649a0764097e2d7ca" Nov 24 17:29:03 crc kubenswrapper[4768]: I1124 17:29:03.997329 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kmfkt/crc-debug-t7bfd" Nov 24 17:29:05 crc kubenswrapper[4768]: I1124 17:29:05.007041 4768 generic.go:334] "Generic (PLEG): container finished" podID="d928b942-3a81-4607-8d34-600ba1825bbc" containerID="e72ba9915571884c31611581249e894660bfe44d362c1f6df56a5a3c86796414" exitCode=0 Nov 24 17:29:05 crc kubenswrapper[4768]: I1124 17:29:05.007077 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kmfkt/crc-debug-n68ws" event={"ID":"d928b942-3a81-4607-8d34-600ba1825bbc","Type":"ContainerDied","Data":"e72ba9915571884c31611581249e894660bfe44d362c1f6df56a5a3c86796414"} Nov 24 17:29:05 crc kubenswrapper[4768]: I1124 17:29:05.041588 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kmfkt/crc-debug-n68ws"] Nov 24 17:29:05 crc kubenswrapper[4768]: I1124 17:29:05.049150 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kmfkt/crc-debug-n68ws"] Nov 24 17:29:06 crc kubenswrapper[4768]: I1124 17:29:06.110745 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kmfkt/crc-debug-n68ws" Nov 24 17:29:06 crc kubenswrapper[4768]: I1124 17:29:06.254095 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d928b942-3a81-4607-8d34-600ba1825bbc-host" (OuterVolumeSpecName: "host") pod "d928b942-3a81-4607-8d34-600ba1825bbc" (UID: "d928b942-3a81-4607-8d34-600ba1825bbc"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 17:29:06 crc kubenswrapper[4768]: I1124 17:29:06.254540 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d928b942-3a81-4607-8d34-600ba1825bbc-host\") pod \"d928b942-3a81-4607-8d34-600ba1825bbc\" (UID: \"d928b942-3a81-4607-8d34-600ba1825bbc\") " Nov 24 17:29:06 crc kubenswrapper[4768]: I1124 17:29:06.254796 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp2ck\" (UniqueName: \"kubernetes.io/projected/d928b942-3a81-4607-8d34-600ba1825bbc-kube-api-access-kp2ck\") pod \"d928b942-3a81-4607-8d34-600ba1825bbc\" (UID: \"d928b942-3a81-4607-8d34-600ba1825bbc\") " Nov 24 17:29:06 crc kubenswrapper[4768]: I1124 17:29:06.255438 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d928b942-3a81-4607-8d34-600ba1825bbc-host\") on node \"crc\" DevicePath \"\"" Nov 24 17:29:06 crc kubenswrapper[4768]: I1124 17:29:06.265571 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d928b942-3a81-4607-8d34-600ba1825bbc-kube-api-access-kp2ck" (OuterVolumeSpecName: "kube-api-access-kp2ck") pod "d928b942-3a81-4607-8d34-600ba1825bbc" (UID: "d928b942-3a81-4607-8d34-600ba1825bbc"). InnerVolumeSpecName "kube-api-access-kp2ck". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:29:06 crc kubenswrapper[4768]: I1124 17:29:06.357164 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp2ck\" (UniqueName: \"kubernetes.io/projected/d928b942-3a81-4607-8d34-600ba1825bbc-kube-api-access-kp2ck\") on node \"crc\" DevicePath \"\"" Nov 24 17:29:07 crc kubenswrapper[4768]: I1124 17:29:07.023256 4768 scope.go:117] "RemoveContainer" containerID="e72ba9915571884c31611581249e894660bfe44d362c1f6df56a5a3c86796414" Nov 24 17:29:07 crc kubenswrapper[4768]: I1124 17:29:07.023300 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kmfkt/crc-debug-n68ws" Nov 24 17:29:07 crc kubenswrapper[4768]: I1124 17:29:07.591689 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d928b942-3a81-4607-8d34-600ba1825bbc" path="/var/lib/kubelet/pods/d928b942-3a81-4607-8d34-600ba1825bbc/volumes" Nov 24 17:29:23 crc kubenswrapper[4768]: I1124 17:29:23.292575 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-54d9965d5d-g2r7n_0eb91316-55e3-466f-bc29-314359383931/barbican-api/0.log" Nov 24 17:29:23 crc kubenswrapper[4768]: I1124 17:29:23.372656 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-54d9965d5d-g2r7n_0eb91316-55e3-466f-bc29-314359383931/barbican-api-log/0.log" Nov 24 17:29:23 crc kubenswrapper[4768]: I1124 17:29:23.460125 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7fdbb4868-m84ml_758f8654-5012-43b2-a4b5-adc902722254/barbican-keystone-listener/0.log" Nov 24 17:29:23 crc kubenswrapper[4768]: I1124 17:29:23.953060 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7fdbb4868-m84ml_758f8654-5012-43b2-a4b5-adc902722254/barbican-keystone-listener-log/0.log" Nov 24 17:29:24 crc kubenswrapper[4768]: I1124 17:29:24.037091 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-864dc88cf9-8c7r4_af648a4f-aca8-4b51-8650-6990ae26b259/barbican-worker/0.log" Nov 24 17:29:24 crc kubenswrapper[4768]: I1124 17:29:24.042122 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-864dc88cf9-8c7r4_af648a4f-aca8-4b51-8650-6990ae26b259/barbican-worker-log/0.log" Nov 24 17:29:24 crc kubenswrapper[4768]: I1124 17:29:24.127320 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_509cc4fd-7197-418e-9536-6024e2a95f58/ceilometer-central-agent/0.log" Nov 24 17:29:24 crc kubenswrapper[4768]: I1124 17:29:24.226943 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_509cc4fd-7197-418e-9536-6024e2a95f58/ceilometer-notification-agent/0.log" Nov 24 17:29:24 crc kubenswrapper[4768]: I1124 17:29:24.228711 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_509cc4fd-7197-418e-9536-6024e2a95f58/proxy-httpd/0.log" Nov 24 17:29:24 crc kubenswrapper[4768]: I1124 17:29:24.317320 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_509cc4fd-7197-418e-9536-6024e2a95f58/sg-core/0.log" Nov 24 17:29:24 crc kubenswrapper[4768]: I1124 17:29:24.393390 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3/cinder-api-log/0.log" Nov 24 17:29:24 crc kubenswrapper[4768]: I1124 17:29:24.455945 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8ed6ebd2-f689-4dc8-b7d3-3558cb4b53e3/cinder-api/0.log" Nov 24 17:29:24 crc kubenswrapper[4768]: I1124 17:29:24.624928 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3b85390f-acde-4350-8c18-1f588ffa8ab5/probe/0.log" Nov 24 17:29:24 crc kubenswrapper[4768]: I1124 17:29:24.657918 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3b85390f-acde-4350-8c18-1f588ffa8ab5/cinder-scheduler/0.log" Nov 24 17:29:24 crc kubenswrapper[4768]: I1124 17:29:24.744822 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-89c5cd4d5-xcjb4_7aad7301-e116-40bb-9af0-f19afd1d17b4/init/0.log" Nov 24 17:29:24 crc kubenswrapper[4768]: I1124 17:29:24.939342 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-89c5cd4d5-xcjb4_7aad7301-e116-40bb-9af0-f19afd1d17b4/init/0.log" Nov 24 17:29:24 crc kubenswrapper[4768]: I1124 17:29:24.951071 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3df486b9-bc37-4240-9ed2-76dc84b54031/glance-httpd/0.log" Nov 24 17:29:24 crc kubenswrapper[4768]: I1124 17:29:24.981174 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-89c5cd4d5-xcjb4_7aad7301-e116-40bb-9af0-f19afd1d17b4/dnsmasq-dns/0.log" Nov 24 17:29:25 crc kubenswrapper[4768]: I1124 17:29:25.118250 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3df486b9-bc37-4240-9ed2-76dc84b54031/glance-log/0.log" Nov 24 17:29:25 crc kubenswrapper[4768]: I1124 17:29:25.148137 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6eb8b800-a966-48fe-8075-4709302ee14d/glance-log/0.log" Nov 24 17:29:25 crc kubenswrapper[4768]: I1124 17:29:25.152676 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6eb8b800-a966-48fe-8075-4709302ee14d/glance-httpd/0.log" Nov 24 17:29:25 crc kubenswrapper[4768]: I1124 17:29:25.332288 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-798b498bb4-66crl_194cfeda-1348-4917-bb28-8cde275f7caa/init/0.log" Nov 24 17:29:25 crc kubenswrapper[4768]: I1124 17:29:25.483088 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-798b498bb4-66crl_194cfeda-1348-4917-bb28-8cde275f7caa/init/0.log" Nov 24 17:29:25 crc kubenswrapper[4768]: I1124 17:29:25.504851 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-798b498bb4-66crl_194cfeda-1348-4917-bb28-8cde275f7caa/ironic-api-log/0.log" Nov 24 17:29:25 crc kubenswrapper[4768]: I1124 17:29:25.591537 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-798b498bb4-66crl_194cfeda-1348-4917-bb28-8cde275f7caa/ironic-api/0.log" Nov 24 17:29:25 crc kubenswrapper[4768]: I1124 17:29:25.675818 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/init/0.log" Nov 24 17:29:25 crc kubenswrapper[4768]: I1124 17:29:25.853898 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/init/0.log" Nov 24 17:29:25 crc kubenswrapper[4768]: I1124 17:29:25.890683 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/ironic-python-agent-init/0.log" Nov 24 17:29:25 crc kubenswrapper[4768]: I1124 17:29:25.890705 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/ironic-python-agent-init/0.log" Nov 24 17:29:26 crc kubenswrapper[4768]: I1124 17:29:26.143606 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/ironic-python-agent-init/0.log" Nov 24 17:29:26 crc kubenswrapper[4768]: I1124 17:29:26.158404 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/init/0.log" Nov 24 17:29:26 crc kubenswrapper[4768]: I1124 17:29:26.541038 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/init/0.log" Nov 24 17:29:26 crc kubenswrapper[4768]: I1124 17:29:26.672259 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/ironic-python-agent-init/0.log" Nov 24 17:29:26 crc kubenswrapper[4768]: I1124 17:29:26.885151 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/httpboot/0.log" Nov 24 17:29:26 crc kubenswrapper[4768]: I1124 17:29:26.981161 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/pxe-init/0.log" Nov 24 17:29:27 crc kubenswrapper[4768]: I1124 17:29:27.139629 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/ironic-conductor/0.log" Nov 24 17:29:27 crc kubenswrapper[4768]: I1124 17:29:27.150019 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/ramdisk-logs/0.log" Nov 24 17:29:27 crc kubenswrapper[4768]: I1124 17:29:27.314558 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/pxe-init/0.log" Nov 24 17:29:27 crc kubenswrapper[4768]: I1124 17:29:27.375979 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-jtdld_443cde2a-91e0-404e-a067-00558608d888/init/0.log" Nov 24 17:29:27 crc kubenswrapper[4768]: I1124 17:29:27.576764 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-jtdld_443cde2a-91e0-404e-a067-00558608d888/init/0.log" Nov 24 17:29:27 crc kubenswrapper[4768]: I1124 17:29:27.599731 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/pxe-init/0.log" Nov 24 17:29:27 crc kubenswrapper[4768]: I1124 17:29:27.642143 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-jtdld_443cde2a-91e0-404e-a067-00558608d888/ironic-db-sync/0.log" Nov 24 17:29:27 crc kubenswrapper[4768]: I1124 17:29:27.736231 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_fa2e1386-88f0-4b0c-b4ff-dae7aad18cbd/pxe-init/0.log" Nov 24 17:29:27 crc kubenswrapper[4768]: I1124 17:29:27.764398 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_4150a56f-5273-4601-8abd-53554fee9e46/ironic-python-agent-init/0.log" Nov 24 17:29:27 crc kubenswrapper[4768]: I1124 17:29:27.985960 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_4150a56f-5273-4601-8abd-53554fee9e46/inspector-pxe-init/0.log" Nov 24 17:29:28 crc kubenswrapper[4768]: I1124 17:29:28.002201 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_4150a56f-5273-4601-8abd-53554fee9e46/inspector-pxe-init/0.log" Nov 24 17:29:28 crc kubenswrapper[4768]: I1124 17:29:28.005729 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_4150a56f-5273-4601-8abd-53554fee9e46/ironic-python-agent-init/0.log" Nov 24 17:29:28 crc 
kubenswrapper[4768]: I1124 17:29:28.175078 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_4150a56f-5273-4601-8abd-53554fee9e46/inspector-pxe-init/0.log" Nov 24 17:29:28 crc kubenswrapper[4768]: I1124 17:29:28.183704 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_4150a56f-5273-4601-8abd-53554fee9e46/inspector-httpboot/0.log" Nov 24 17:29:28 crc kubenswrapper[4768]: I1124 17:29:28.207098 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_4150a56f-5273-4601-8abd-53554fee9e46/ironic-inspector/1.log" Nov 24 17:29:28 crc kubenswrapper[4768]: I1124 17:29:28.208415 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_4150a56f-5273-4601-8abd-53554fee9e46/ironic-python-agent-init/0.log" Nov 24 17:29:28 crc kubenswrapper[4768]: I1124 17:29:28.242808 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_4150a56f-5273-4601-8abd-53554fee9e46/ironic-inspector/2.log" Nov 24 17:29:28 crc kubenswrapper[4768]: I1124 17:29:28.357308 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_4150a56f-5273-4601-8abd-53554fee9e46/ramdisk-logs/0.log" Nov 24 17:29:28 crc kubenswrapper[4768]: I1124 17:29:28.394619 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_4150a56f-5273-4601-8abd-53554fee9e46/ironic-inspector-httpd/0.log" Nov 24 17:29:28 crc kubenswrapper[4768]: I1124 17:29:28.460382 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-db-sync-hk9hx_d39158e2-1592-48f9-ba0e-198ab1030790/ironic-inspector-db-sync/0.log" Nov 24 17:29:28 crc kubenswrapper[4768]: I1124 17:29:28.565973 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-cb4d89897-bnsh5_26b563bb-da9a-43fe-b201-9f77ed0d0ddd/ironic-neutron-agent/2.log" Nov 24 17:29:28 crc kubenswrapper[4768]: I1124 17:29:28.604793 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-cb4d89897-bnsh5_26b563bb-da9a-43fe-b201-9f77ed0d0ddd/ironic-neutron-agent/1.log" Nov 24 17:29:28 crc kubenswrapper[4768]: I1124 17:29:28.771786 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_be754be8-e18d-4413-bf31-5258e9ad4544/kube-state-metrics/0.log" Nov 24 17:29:28 crc kubenswrapper[4768]: I1124 17:29:28.865640 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-74667f8554-ph5sd_eff6ece5-de21-4541-96d3-7a82e5a1d789/keystone-api/0.log" Nov 24 17:29:29 crc kubenswrapper[4768]: I1124 17:29:29.089458 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-c9b47fdf7-ztl8b_4ffadf60-9eff-4bf9-b0bd-9480cbd0d917/neutron-api/0.log" Nov 24 17:29:29 crc kubenswrapper[4768]: I1124 17:29:29.157001 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-c9b47fdf7-ztl8b_4ffadf60-9eff-4bf9-b0bd-9480cbd0d917/neutron-httpd/0.log" Nov 24 17:29:29 crc kubenswrapper[4768]: I1124 17:29:29.413625 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2f9be604-c179-43ac-b565-428652071d6e/nova-api-log/0.log" Nov 24 17:29:29 crc kubenswrapper[4768]: I1124 17:29:29.592489 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2f9be604-c179-43ac-b565-428652071d6e/nova-api-api/0.log" Nov 24 17:29:29 crc 
kubenswrapper[4768]: I1124 17:29:29.679328 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_aaf1fd30-6ac7-4418-93f7-cf24adacd921/nova-cell0-conductor-conductor/0.log" Nov 24 17:29:29 crc kubenswrapper[4768]: I1124 17:29:29.774299 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f2ce7d34-ee24-4fc3-8cad-1ca78c6e1d51/nova-cell1-conductor-conductor/0.log" Nov 24 17:29:29 crc kubenswrapper[4768]: I1124 17:29:29.931390 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_0f182adb-6256-41d3-b7f0-bfa5e16965f7/nova-cell1-novncproxy-novncproxy/0.log" Nov 24 17:29:30 crc kubenswrapper[4768]: I1124 17:29:30.069824 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_972962b2-f34e-4ad2-825e-2be316ce2ec3/nova-metadata-log/0.log" Nov 24 17:29:30 crc kubenswrapper[4768]: I1124 17:29:30.425724 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f5fda78c-6764-4dfb-837a-b9e48ff5bea8/mysql-bootstrap/0.log" Nov 24 17:29:30 crc kubenswrapper[4768]: I1124 17:29:30.436290 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_589aaf7d-1ce5-4a36-9501-b91900237cb4/nova-scheduler-scheduler/0.log" Nov 24 17:29:30 crc kubenswrapper[4768]: I1124 17:29:30.687718 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f5fda78c-6764-4dfb-837a-b9e48ff5bea8/mysql-bootstrap/0.log" Nov 24 17:29:30 crc kubenswrapper[4768]: I1124 17:29:30.707321 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f5fda78c-6764-4dfb-837a-b9e48ff5bea8/galera/0.log" Nov 24 17:29:30 crc kubenswrapper[4768]: I1124 17:29:30.740527 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_972962b2-f34e-4ad2-825e-2be316ce2ec3/nova-metadata-metadata/0.log" Nov 24 17:29:30 crc kubenswrapper[4768]: I1124 17:29:30.881157 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0d0c08ff-07c5-42d9-bbd4-77169f98868a/mysql-bootstrap/0.log" Nov 24 17:29:31 crc kubenswrapper[4768]: I1124 17:29:31.051503 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0d0c08ff-07c5-42d9-bbd4-77169f98868a/mysql-bootstrap/0.log" Nov 24 17:29:31 crc kubenswrapper[4768]: I1124 17:29:31.126321 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0d0c08ff-07c5-42d9-bbd4-77169f98868a/galera/0.log" Nov 24 17:29:31 crc kubenswrapper[4768]: I1124 17:29:31.181107 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7541e37b-3221-4158-8d66-4682a77e8172/openstackclient/0.log" Nov 24 17:29:31 crc kubenswrapper[4768]: I1124 17:29:31.370107 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-8j94t_df42583a-33cf-4b89-9f69-7f3baeb6e7b5/ovn-controller/0.log" Nov 24 17:29:31 crc kubenswrapper[4768]: I1124 17:29:31.481031 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-4crs9_d668018c-aa61-4c17-9af6-f00933b4160c/openstack-network-exporter/0.log" Nov 24 17:29:31 crc kubenswrapper[4768]: I1124 17:29:31.671892 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zpbbq_40425fc1-a61b-4da7-95a4-262b16a8020f/ovsdb-server-init/0.log" Nov 24 17:29:31 crc 
kubenswrapper[4768]: I1124 17:29:31.815080 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zpbbq_40425fc1-a61b-4da7-95a4-262b16a8020f/ovsdb-server-init/0.log" Nov 24 17:29:31 crc kubenswrapper[4768]: I1124 17:29:31.875021 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zpbbq_40425fc1-a61b-4da7-95a4-262b16a8020f/ovsdb-server/0.log" Nov 24 17:29:31 crc kubenswrapper[4768]: I1124 17:29:31.889499 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zpbbq_40425fc1-a61b-4da7-95a4-262b16a8020f/ovs-vswitchd/0.log" Nov 24 17:29:32 crc kubenswrapper[4768]: I1124 17:29:32.235731 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_84f66fa0-19d0-40f2-a4d0-4ddc58101d00/ovn-northd/0.log" Nov 24 17:29:32 crc kubenswrapper[4768]: I1124 17:29:32.236040 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_84f66fa0-19d0-40f2-a4d0-4ddc58101d00/openstack-network-exporter/0.log" Nov 24 17:29:32 crc kubenswrapper[4768]: I1124 17:29:32.425661 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f4ae8da1-9449-46bf-8e88-fc42708e6c53/openstack-network-exporter/0.log" Nov 24 17:29:32 crc kubenswrapper[4768]: I1124 17:29:32.452629 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f4ae8da1-9449-46bf-8e88-fc42708e6c53/ovsdbserver-nb/0.log" Nov 24 17:29:32 crc kubenswrapper[4768]: I1124 17:29:32.607874 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_841a709e-ced3-499f-b13e-d0e1ff90ad11/openstack-network-exporter/0.log" Nov 24 17:29:32 crc kubenswrapper[4768]: I1124 17:29:32.661305 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_841a709e-ced3-499f-b13e-d0e1ff90ad11/ovsdbserver-sb/0.log" Nov 24 17:29:32 crc kubenswrapper[4768]: I1124 17:29:32.736393 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-58f546f576-kqv27_244d26f2-3748-48ba-ab9f-ba52e5ad5729/placement-api/0.log" Nov 24 17:29:32 crc kubenswrapper[4768]: I1124 17:29:32.923862 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cf5db907-56c6-4254-8a98-0a6750fd0a07/setup-container/0.log" Nov 24 17:29:32 crc kubenswrapper[4768]: I1124 17:29:32.957014 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-58f546f576-kqv27_244d26f2-3748-48ba-ab9f-ba52e5ad5729/placement-log/0.log" Nov 24 17:29:33 crc kubenswrapper[4768]: I1124 17:29:33.159596 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cf5db907-56c6-4254-8a98-0a6750fd0a07/setup-container/0.log" Nov 24 17:29:33 crc kubenswrapper[4768]: I1124 17:29:33.218275 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cf5db907-56c6-4254-8a98-0a6750fd0a07/rabbitmq/0.log" Nov 24 17:29:33 crc kubenswrapper[4768]: I1124 17:29:33.234907 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b1be76b0-164b-4bd7-950a-38e512cb4d5a/setup-container/0.log" Nov 24 17:29:33 crc kubenswrapper[4768]: I1124 17:29:33.385926 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b1be76b0-164b-4bd7-950a-38e512cb4d5a/setup-container/0.log" Nov 24 17:29:33 crc kubenswrapper[4768]: I1124 17:29:33.468654 4768 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b1be76b0-164b-4bd7-950a-38e512cb4d5a/rabbitmq/0.log" Nov 24 17:29:33 crc kubenswrapper[4768]: I1124 17:29:33.507469 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-68997d6dc7-xqk74_6faf5c89-9071-4710-bf7a-91f8b276370b/proxy-httpd/0.log" Nov 24 17:29:33 crc kubenswrapper[4768]: I1124 17:29:33.602197 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-68997d6dc7-xqk74_6faf5c89-9071-4710-bf7a-91f8b276370b/proxy-server/0.log" Nov 24 17:29:33 crc kubenswrapper[4768]: I1124 17:29:33.708478 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-ks8ts_b46e54e9-1ffb-4094-a42a-0d7a86fff17c/swift-ring-rebalance/0.log" Nov 24 17:29:33 crc kubenswrapper[4768]: I1124 17:29:33.898557 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/account-auditor/0.log" Nov 24 17:29:33 crc kubenswrapper[4768]: I1124 17:29:33.900086 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/account-reaper/0.log" Nov 24 17:29:33 crc kubenswrapper[4768]: I1124 17:29:33.926246 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/account-replicator/0.log" Nov 24 17:29:34 crc kubenswrapper[4768]: I1124 17:29:34.021939 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/account-server/0.log" Nov 24 17:29:34 crc kubenswrapper[4768]: I1124 17:29:34.079000 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/container-replicator/0.log" Nov 24 17:29:34 crc kubenswrapper[4768]: I1124 17:29:34.127746 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/container-auditor/0.log" Nov 24 17:29:34 crc kubenswrapper[4768]: I1124 17:29:34.159722 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/container-server/0.log" Nov 24 17:29:34 crc kubenswrapper[4768]: I1124 17:29:34.181916 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/container-updater/0.log" Nov 24 17:29:34 crc kubenswrapper[4768]: I1124 17:29:34.287988 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/object-auditor/0.log" Nov 24 17:29:34 crc kubenswrapper[4768]: I1124 17:29:34.296715 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/object-expirer/0.log" Nov 24 17:29:34 crc kubenswrapper[4768]: I1124 17:29:34.358290 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/object-replicator/0.log" Nov 24 17:29:34 crc kubenswrapper[4768]: I1124 17:29:34.388729 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/object-server/0.log" Nov 24 17:29:34 crc kubenswrapper[4768]: I1124 17:29:34.492605 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/rsync/0.log" Nov 24 17:29:34 crc kubenswrapper[4768]: I1124 17:29:34.501086 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/object-updater/0.log" Nov 24 17:29:34 crc kubenswrapper[4768]: I1124 17:29:34.570830 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1b76679b-41cc-4ddf-898b-5a05b5cfa052/swift-recon-cron/0.log" Nov 24 17:29:38 crc kubenswrapper[4768]: I1124 17:29:38.635841 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_bfe18146-b6db-422b-965f-8b22d4943e4f/memcached/0.log" Nov 24 17:29:54 crc kubenswrapper[4768]: I1124 17:29:54.608426 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz_3986c16f-d992-4d26-9f12-0892ffc031d6/util/0.log" Nov 24 17:29:54 crc kubenswrapper[4768]: I1124 17:29:54.759010 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz_3986c16f-d992-4d26-9f12-0892ffc031d6/util/0.log" Nov 24 17:29:54 crc kubenswrapper[4768]: I1124 17:29:54.770029 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz_3986c16f-d992-4d26-9f12-0892ffc031d6/pull/0.log" Nov 24 17:29:54 crc kubenswrapper[4768]: I1124 17:29:54.776933 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz_3986c16f-d992-4d26-9f12-0892ffc031d6/pull/0.log" Nov 24 17:29:54 crc kubenswrapper[4768]: I1124 17:29:54.945231 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz_3986c16f-d992-4d26-9f12-0892ffc031d6/pull/0.log" Nov 24 17:29:54 crc kubenswrapper[4768]: I1124 17:29:54.955127 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz_3986c16f-d992-4d26-9f12-0892ffc031d6/extract/0.log" Nov 24 17:29:54 crc kubenswrapper[4768]: I1124 17:29:54.972544 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3ffb6c364ebc6520fe913cfb453c63093fdfa9646d70131f82633c6a629vwrz_3986c16f-d992-4d26-9f12-0892ffc031d6/util/0.log" Nov 24 17:29:55 crc kubenswrapper[4768]: I1124 17:29:55.097013 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-4xg49_61f1ba78-cd9d-4202-9463-f7a4c5cc9092/kube-rbac-proxy/0.log" Nov 24 17:29:55 crc kubenswrapper[4768]: I1124 17:29:55.189173 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-gnzjb_1a60eac6-e17c-4621-9367-3d1b60aab811/kube-rbac-proxy/0.log" Nov 24 17:29:55 crc kubenswrapper[4768]: I1124 17:29:55.193803 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-4xg49_61f1ba78-cd9d-4202-9463-f7a4c5cc9092/manager/0.log" Nov 24 17:29:55 crc kubenswrapper[4768]: I1124 17:29:55.312015 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-gnzjb_1a60eac6-e17c-4621-9367-3d1b60aab811/manager/0.log" Nov 24 17:29:55 crc kubenswrapper[4768]: I1124 17:29:55.382537 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-jdszs_f5b8ba2f-084a-4285-938b-5ffe669a9250/kube-rbac-proxy/0.log" Nov 24 17:29:55 crc kubenswrapper[4768]: I1124 17:29:55.395240 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-jdszs_f5b8ba2f-084a-4285-938b-5ffe669a9250/manager/0.log" Nov 24 17:29:55 crc kubenswrapper[4768]: I1124 17:29:55.576493 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-fxzrc_db716c0e-bc96-4eaa-af75-184cd71e8124/kube-rbac-proxy/0.log" Nov 24 17:29:55 crc kubenswrapper[4768]: I1124 17:29:55.614701 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-fxzrc_db716c0e-bc96-4eaa-af75-184cd71e8124/manager/0.log" Nov 24 17:29:55 crc kubenswrapper[4768]: I1124 17:29:55.720484 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-6smrr_d35343f5-188c-4787-9002-125c9e597e80/kube-rbac-proxy/0.log" Nov 24 17:29:55 crc kubenswrapper[4768]: I1124 17:29:55.744062 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-6smrr_d35343f5-188c-4787-9002-125c9e597e80/manager/0.log" Nov 24 17:29:55 crc kubenswrapper[4768]: I1124 17:29:55.789983 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-jfk9g_f7e72195-5597-498f-906e-573b0c5c8295/kube-rbac-proxy/0.log" Nov 24 17:29:55 crc kubenswrapper[4768]: I1124 17:29:55.890488 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-jfk9g_f7e72195-5597-498f-906e-573b0c5c8295/manager/0.log" Nov 24 17:29:55 crc kubenswrapper[4768]: I1124 17:29:55.939215 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-d9crw_e2835f06-b5ce-4170-a4c3-4a08e9cc2815/kube-rbac-proxy/0.log" Nov 24 17:29:56 crc kubenswrapper[4768]: I1124 17:29:56.097035 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-d9crw_e2835f06-b5ce-4170-a4c3-4a08e9cc2815/manager/0.log" Nov 24 17:29:56 crc kubenswrapper[4768]: I1124 17:29:56.141023 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-58fc45656d-mlqr9_cdfcbb97-9f2e-40ab-863a-93e592ee728a/kube-rbac-proxy/0.log" Nov 24 17:29:56 crc kubenswrapper[4768]: I1124 17:29:56.210168 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-58fc45656d-mlqr9_cdfcbb97-9f2e-40ab-863a-93e592ee728a/manager/0.log" Nov 24 17:29:56 crc kubenswrapper[4768]: I1124 17:29:56.332315 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-zsr4q_8eff7b8e-21b1-4d9f-ac7b-bc44593394c1/kube-rbac-proxy/0.log" Nov 24 17:29:56 crc kubenswrapper[4768]: I1124 17:29:56.386263 4768 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-zsr4q_8eff7b8e-21b1-4d9f-ac7b-bc44593394c1/manager/0.log" Nov 24 17:29:56 crc kubenswrapper[4768]: I1124 17:29:56.520775 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-dsjtl_a718e502-d0e6-45ee-8a65-88de1381da04/kube-rbac-proxy/0.log" Nov 24 17:29:56 crc kubenswrapper[4768]: I1124 17:29:56.521698 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-dsjtl_a718e502-d0e6-45ee-8a65-88de1381da04/manager/0.log" Nov 24 17:29:56 crc kubenswrapper[4768]: I1124 17:29:56.653893 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-xv4wf_a8b9e845-7f76-4609-aef9-89d1a16c971b/kube-rbac-proxy/0.log" Nov 24 17:29:56 crc kubenswrapper[4768]: I1124 17:29:56.707045 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-xv4wf_a8b9e845-7f76-4609-aef9-89d1a16c971b/manager/0.log" Nov 24 17:29:56 crc kubenswrapper[4768]: I1124 17:29:56.762227 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-9x7r8_2f3138aa-0515-46f5-b897-191356f55fa4/kube-rbac-proxy/0.log" Nov 24 17:29:56 crc kubenswrapper[4768]: I1124 17:29:56.884596 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-9x7r8_2f3138aa-0515-46f5-b897-191356f55fa4/manager/0.log" Nov 24 17:29:56 crc kubenswrapper[4768]: I1124 17:29:56.965399 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-9sgvb_18e53c5a-b1dd-4f0e-9bf6-8d97954d9d5d/kube-rbac-proxy/0.log" Nov 24 17:29:57 crc kubenswrapper[4768]: I1124 17:29:57.001971 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-9sgvb_18e53c5a-b1dd-4f0e-9bf6-8d97954d9d5d/manager/0.log" Nov 24 17:29:57 crc kubenswrapper[4768]: I1124 17:29:57.083425 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-6nh25_8badbdc1-a611-4ada-821a-daade496a649/kube-rbac-proxy/0.log" Nov 24 17:29:57 crc kubenswrapper[4768]: I1124 17:29:57.158392 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-6nh25_8badbdc1-a611-4ada-821a-daade496a649/manager/0.log" Nov 24 17:29:57 crc kubenswrapper[4768]: I1124 17:29:57.189718 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g_2020ac4a-5a4a-4c38-b667-5432dbf3d891/kube-rbac-proxy/0.log" Nov 24 17:29:57 crc kubenswrapper[4768]: I1124 17:29:57.279519 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-r8v7g_2020ac4a-5a4a-4c38-b667-5432dbf3d891/manager/0.log" Nov 24 17:29:57 crc kubenswrapper[4768]: I1124 17:29:57.645486 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-777rr_87ecddb5-623c-40cb-ba80-c869cea78856/registry-server/0.log" Nov 24 17:29:57 crc kubenswrapper[4768]: I1124 17:29:57.657630 4768 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-849cb45cff-pvcvk_24c6b375-70f7-4954-9f65-4e3dcf12de68/operator/0.log" Nov 24 17:29:57 crc kubenswrapper[4768]: I1124 17:29:57.835017 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-v2hfk_f7c09f33-05d7-4251-930c-43d381f7f662/kube-rbac-proxy/0.log" Nov 24 17:29:57 crc kubenswrapper[4768]: I1124 17:29:57.942435 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-v2hfk_f7c09f33-05d7-4251-930c-43d381f7f662/manager/0.log" Nov 24 17:29:58 crc kubenswrapper[4768]: I1124 17:29:58.165935 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-qnqvs_e2f173d4-03f8-44b0-b05f-3dfd845569e8/kube-rbac-proxy/0.log" Nov 24 17:29:58 crc kubenswrapper[4768]: I1124 17:29:58.204580 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-qnqvs_e2f173d4-03f8-44b0-b05f-3dfd845569e8/manager/0.log" Nov 24 17:29:58 crc kubenswrapper[4768]: I1124 17:29:58.260915 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-56fcd5b457-nhnr6_920f3653-2dc6-4999-81c4-05248ca44d07/manager/0.log" Nov 24 17:29:58 crc kubenswrapper[4768]: I1124 17:29:58.361292 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-ttgkz_9657d373-da37-4ca2-b8fe-7827bc37706f/operator/0.log" Nov 24 17:29:58 crc kubenswrapper[4768]: I1124 17:29:58.400911 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-x24f2_27ed9b45-b076-4104-a661-bc231021ae5b/kube-rbac-proxy/0.log" Nov 24 17:29:58 crc kubenswrapper[4768]: I1124 17:29:58.451564 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-x24f2_27ed9b45-b076-4104-a661-bc231021ae5b/manager/0.log" Nov 24 17:29:58 crc kubenswrapper[4768]: I1124 17:29:58.529680 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-hvvsp_5b5647ed-7d14-4366-af99-d6d48ec2f033/kube-rbac-proxy/0.log" Nov 24 17:29:58 crc kubenswrapper[4768]: I1124 17:29:58.627041 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-hvvsp_5b5647ed-7d14-4366-af99-d6d48ec2f033/manager/0.log" Nov 24 17:29:58 crc kubenswrapper[4768]: I1124 17:29:58.669596 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-gtc95_d40b5804-6340-4be6-8da4-dca19827c8ee/kube-rbac-proxy/0.log" Nov 24 17:29:58 crc kubenswrapper[4768]: I1124 17:29:58.737556 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-gtc95_d40b5804-6340-4be6-8da4-dca19827c8ee/manager/0.log" Nov 24 17:29:58 crc kubenswrapper[4768]: I1124 17:29:58.829704 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-b2r7j_f5471d19-b623-4aa2-9a14-56d05fe236f8/manager/0.log" Nov 24 17:29:58 crc kubenswrapper[4768]: I1124 17:29:58.837016 
4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-b2r7j_f5471d19-b623-4aa2-9a14-56d05fe236f8/kube-rbac-proxy/0.log" Nov 24 17:30:00 crc kubenswrapper[4768]: I1124 17:30:00.141252 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400090-b6ks4"] Nov 24 17:30:00 crc kubenswrapper[4768]: E1124 17:30:00.141934 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d928b942-3a81-4607-8d34-600ba1825bbc" containerName="container-00" Nov 24 17:30:00 crc kubenswrapper[4768]: I1124 17:30:00.141946 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d928b942-3a81-4607-8d34-600ba1825bbc" containerName="container-00" Nov 24 17:30:00 crc kubenswrapper[4768]: I1124 17:30:00.142130 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d928b942-3a81-4607-8d34-600ba1825bbc" containerName="container-00" Nov 24 17:30:00 crc kubenswrapper[4768]: I1124 17:30:00.142768 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400090-b6ks4" Nov 24 17:30:00 crc kubenswrapper[4768]: I1124 17:30:00.146742 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 17:30:00 crc kubenswrapper[4768]: I1124 17:30:00.146990 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 17:30:00 crc kubenswrapper[4768]: I1124 17:30:00.150854 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400090-b6ks4"] Nov 24 17:30:00 crc kubenswrapper[4768]: I1124 17:30:00.263661 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3-secret-volume\") pod \"collect-profiles-29400090-b6ks4\" (UID: \"c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400090-b6ks4" Nov 24 17:30:00 crc kubenswrapper[4768]: I1124 17:30:00.263782 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3-config-volume\") pod \"collect-profiles-29400090-b6ks4\" (UID: \"c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400090-b6ks4" Nov 24 17:30:00 crc kubenswrapper[4768]: I1124 17:30:00.263959 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55fnj\" (UniqueName: \"kubernetes.io/projected/c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3-kube-api-access-55fnj\") pod \"collect-profiles-29400090-b6ks4\" (UID: \"c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400090-b6ks4" Nov 24 17:30:00 crc kubenswrapper[4768]: I1124 17:30:00.365836 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55fnj\" (UniqueName: \"kubernetes.io/projected/c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3-kube-api-access-55fnj\") pod \"collect-profiles-29400090-b6ks4\" (UID: \"c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400090-b6ks4" 
Nov 24 17:30:00 crc kubenswrapper[4768]: I1124 17:30:00.365943 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3-secret-volume\") pod \"collect-profiles-29400090-b6ks4\" (UID: \"c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400090-b6ks4" Nov 24 17:30:00 crc kubenswrapper[4768]: I1124 17:30:00.365988 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3-config-volume\") pod \"collect-profiles-29400090-b6ks4\" (UID: \"c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400090-b6ks4" Nov 24 17:30:00 crc kubenswrapper[4768]: I1124 17:30:00.366863 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3-config-volume\") pod \"collect-profiles-29400090-b6ks4\" (UID: \"c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400090-b6ks4" Nov 24 17:30:00 crc kubenswrapper[4768]: I1124 17:30:00.371486 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3-secret-volume\") pod \"collect-profiles-29400090-b6ks4\" (UID: \"c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400090-b6ks4" Nov 24 17:30:00 crc kubenswrapper[4768]: I1124 17:30:00.380615 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55fnj\" (UniqueName: \"kubernetes.io/projected/c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3-kube-api-access-55fnj\") pod \"collect-profiles-29400090-b6ks4\" (UID: \"c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400090-b6ks4" Nov 24 17:30:00 crc kubenswrapper[4768]: I1124 17:30:00.467455 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400090-b6ks4" Nov 24 17:30:00 crc kubenswrapper[4768]: I1124 17:30:00.934807 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400090-b6ks4"] Nov 24 17:30:01 crc kubenswrapper[4768]: I1124 17:30:01.465200 4768 generic.go:334] "Generic (PLEG): container finished" podID="c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3" containerID="d48cb062e4a30028ed92d4d28d86d8e4337b128b0a3762398b251ac7b6a33010" exitCode=0 Nov 24 17:30:01 crc kubenswrapper[4768]: I1124 17:30:01.465264 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400090-b6ks4" event={"ID":"c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3","Type":"ContainerDied","Data":"d48cb062e4a30028ed92d4d28d86d8e4337b128b0a3762398b251ac7b6a33010"} Nov 24 17:30:01 crc kubenswrapper[4768]: I1124 17:30:01.465717 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400090-b6ks4" event={"ID":"c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3","Type":"ContainerStarted","Data":"d71f5c17a5d384c138dc21ffee1b482af68aa33e89376bee3d8c98c1202093c8"} Nov 24 17:30:02 crc kubenswrapper[4768]: I1124 17:30:02.796296 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400090-b6ks4" Nov 24 17:30:02 crc kubenswrapper[4768]: I1124 17:30:02.818740 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3-config-volume\") pod \"c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3\" (UID: \"c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3\") " Nov 24 17:30:02 crc kubenswrapper[4768]: I1124 17:30:02.818859 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3-secret-volume\") pod \"c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3\" (UID: \"c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3\") " Nov 24 17:30:02 crc kubenswrapper[4768]: I1124 17:30:02.818915 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55fnj\" (UniqueName: \"kubernetes.io/projected/c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3-kube-api-access-55fnj\") pod \"c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3\" (UID: \"c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3\") " Nov 24 17:30:02 crc kubenswrapper[4768]: I1124 17:30:02.819582 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3-config-volume" (OuterVolumeSpecName: "config-volume") pod "c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3" (UID: "c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 17:30:02 crc kubenswrapper[4768]: I1124 17:30:02.825454 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3" (UID: "c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 17:30:02 crc kubenswrapper[4768]: I1124 17:30:02.825758 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3-kube-api-access-55fnj" (OuterVolumeSpecName: "kube-api-access-55fnj") pod "c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3" (UID: "c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3"). InnerVolumeSpecName "kube-api-access-55fnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:30:02 crc kubenswrapper[4768]: I1124 17:30:02.920800 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55fnj\" (UniqueName: \"kubernetes.io/projected/c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3-kube-api-access-55fnj\") on node \"crc\" DevicePath \"\"" Nov 24 17:30:02 crc kubenswrapper[4768]: I1124 17:30:02.920836 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 17:30:02 crc kubenswrapper[4768]: I1124 17:30:02.920846 4768 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 17:30:03 crc kubenswrapper[4768]: I1124 17:30:03.483214 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400090-b6ks4" event={"ID":"c32fbae6-50c6-4b5a-8f24-c5b7bb7a7ea3","Type":"ContainerDied","Data":"d71f5c17a5d384c138dc21ffee1b482af68aa33e89376bee3d8c98c1202093c8"} Nov 24 17:30:03 crc kubenswrapper[4768]: I1124 17:30:03.483556 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d71f5c17a5d384c138dc21ffee1b482af68aa33e89376bee3d8c98c1202093c8" Nov 24 17:30:03 crc kubenswrapper[4768]: I1124 17:30:03.483242 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400090-b6ks4" Nov 24 17:30:03 crc kubenswrapper[4768]: I1124 17:30:03.868318 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8"] Nov 24 17:30:03 crc kubenswrapper[4768]: I1124 17:30:03.877723 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400045-ttsf8"] Nov 24 17:30:05 crc kubenswrapper[4768]: I1124 17:30:05.603652 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34d6a2c2-3620-4dd5-a7fd-a160030b3c7d" path="/var/lib/kubelet/pods/34d6a2c2-3620-4dd5-a7fd-a160030b3c7d/volumes" Nov 24 17:30:14 crc kubenswrapper[4768]: I1124 17:30:14.363655 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dn5t9_622eb95d-1893-421b-890b-0fbd87dfa0b2/control-plane-machine-set-operator/0.log" Nov 24 17:30:14 crc kubenswrapper[4768]: I1124 17:30:14.526812 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mbvp9_063d4b06-d385-4749-8394-14041350b8e9/kube-rbac-proxy/0.log" Nov 24 17:30:14 crc kubenswrapper[4768]: I1124 17:30:14.543732 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mbvp9_063d4b06-d385-4749-8394-14041350b8e9/machine-api-operator/0.log" Nov 24 17:30:26 crc kubenswrapper[4768]: I1124 17:30:26.248542 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-hbcgj_406ba9bc-fe9f-4e90-be27-c7947c0049cd/cert-manager-controller/0.log" Nov 24 17:30:26 crc kubenswrapper[4768]: I1124 17:30:26.449596 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-vcfg5_553c5463-b1f6-410c-a1d6-032a7c57d30c/cert-manager-cainjector/0.log" Nov 24 17:30:26 crc kubenswrapper[4768]: I1124 17:30:26.500669 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-kd7w2_6e2775e6-ea84-4b7d-a5e7-0ddc4b3d174b/cert-manager-webhook/0.log" Nov 24 17:30:28 crc kubenswrapper[4768]: I1124 17:30:28.650427 4768 scope.go:117] "RemoveContainer" containerID="769e2692ea50cf6b0edcb7b7e7c91ed8a8c3484a19c12451b191f27cf6e7fb35" Nov 24 17:30:38 crc kubenswrapper[4768]: I1124 17:30:38.305105 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-6zplx_8f14df85-542b-433f-a661-79f1707a03ad/nmstate-console-plugin/0.log" Nov 24 17:30:38 crc kubenswrapper[4768]: I1124 17:30:38.477680 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-5hkm5_5fd414a4-49e9-44b7-8207-e4edb7887dba/kube-rbac-proxy/0.log" Nov 24 17:30:38 crc kubenswrapper[4768]: I1124 17:30:38.500930 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-p2zsh_75419742-7b67-4c11-9d45-2db75c1d8342/nmstate-handler/0.log" Nov 24 17:30:38 crc kubenswrapper[4768]: I1124 17:30:38.569841 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-5hkm5_5fd414a4-49e9-44b7-8207-e4edb7887dba/nmstate-metrics/0.log" Nov 24 17:30:38 crc kubenswrapper[4768]: I1124 17:30:38.720163 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-pqvp5_0149c8b7-22b3-4d9d-8bb1-6b8725c3e85b/nmstate-operator/0.log" Nov 24 17:30:38 crc kubenswrapper[4768]: I1124 17:30:38.763806 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-rht2n_3755f3c6-8022-4edb-8efe-b858b58cf052/nmstate-webhook/0.log" Nov 24 17:30:52 crc kubenswrapper[4768]: I1124 17:30:52.466040 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-6rm47_7ec0e305-1a0c-449b-8c6c-9f5930582193/kube-rbac-proxy/0.log" Nov 24 17:30:52 crc kubenswrapper[4768]: I1124 17:30:52.512101 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-6rm47_7ec0e305-1a0c-449b-8c6c-9f5930582193/controller/0.log" Nov 24 17:30:52 crc kubenswrapper[4768]: I1124 17:30:52.596958 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-2szdn_98a7049b-d1ef-41d1-aa13-62bc2f1657ea/frr-k8s-webhook-server/0.log" Nov 24 17:30:52 crc kubenswrapper[4768]: I1124 17:30:52.700488 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-frr-files/0.log" Nov 24 17:30:52 crc kubenswrapper[4768]: I1124 17:30:52.902981 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-frr-files/0.log" Nov 24 17:30:52 crc kubenswrapper[4768]: I1124 17:30:52.908690 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-reloader/0.log" Nov 24 17:30:52 crc kubenswrapper[4768]: I1124 17:30:52.909400 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-metrics/0.log" Nov 24 17:30:52 crc kubenswrapper[4768]: I1124 17:30:52.913418 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-reloader/0.log" Nov 24 17:30:53 crc kubenswrapper[4768]: I1124 17:30:53.077606 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-reloader/0.log" Nov 24 17:30:53 crc kubenswrapper[4768]: I1124 17:30:53.096327 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-metrics/0.log" Nov 24 17:30:53 crc kubenswrapper[4768]: I1124 17:30:53.100778 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-frr-files/0.log" Nov 24 17:30:53 crc kubenswrapper[4768]: I1124 17:30:53.112546 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-metrics/0.log" Nov 24 17:30:53 crc kubenswrapper[4768]: I1124 17:30:53.286720 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-metrics/0.log" Nov 24 17:30:53 crc kubenswrapper[4768]: I1124 17:30:53.288837 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-frr-files/0.log" Nov 24 17:30:53 crc kubenswrapper[4768]: I1124 17:30:53.293970 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/cp-reloader/0.log" Nov 24 17:30:53 crc kubenswrapper[4768]: I1124 17:30:53.302438 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/controller/0.log" Nov 24 17:30:53 crc kubenswrapper[4768]: I1124 17:30:53.452789 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/frr-metrics/0.log" Nov 24 17:30:53 crc kubenswrapper[4768]: I1124 17:30:53.472097 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/kube-rbac-proxy/0.log" Nov 24 17:30:53 crc kubenswrapper[4768]: I1124 17:30:53.477634 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/kube-rbac-proxy-frr/0.log" Nov 24 17:30:53 crc kubenswrapper[4768]: I1124 17:30:53.665004 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/reloader/0.log" Nov 24 17:30:53 crc kubenswrapper[4768]: I1124 17:30:53.734279 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5cc97d846-2sqgw_351e35d8-541a-43c5-b07d-affa44d1c013/manager/0.log" Nov 24 17:30:53 crc kubenswrapper[4768]: I1124 17:30:53.876981 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5f6bc667bb-56fwx_ca825a3d-d8e1-45ce-af38-6874f0b3c498/webhook-server/0.log" Nov 24 17:30:54 crc kubenswrapper[4768]: I1124 17:30:54.130012 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9m6sf_6bd76705-44df-4419-a1d4-e294b3d010fd/kube-rbac-proxy/0.log" Nov 24 17:30:54 crc kubenswrapper[4768]: I1124 17:30:54.452970 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9m6sf_6bd76705-44df-4419-a1d4-e294b3d010fd/speaker/0.log" Nov 24 17:30:54 crc kubenswrapper[4768]: I1124 17:30:54.640796 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xt7mv_7a21efe0-4145-43ac-9e98-31fecbc074d5/frr/0.log" Nov 24 17:31:04 crc kubenswrapper[4768]: I1124 17:31:04.892827 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:31:04 crc kubenswrapper[4768]: I1124 17:31:04.893580 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:31:05 crc kubenswrapper[4768]: I1124 17:31:05.582844 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp_30c009ab-380d-4bc7-a771-61d41ad10d35/util/0.log" Nov 24 17:31:05 crc kubenswrapper[4768]: I1124 17:31:05.780186 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp_30c009ab-380d-4bc7-a771-61d41ad10d35/util/0.log" Nov 24 17:31:05 crc kubenswrapper[4768]: I1124 17:31:05.817303 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp_30c009ab-380d-4bc7-a771-61d41ad10d35/pull/0.log" Nov 24 17:31:05 crc kubenswrapper[4768]: I1124 17:31:05.846843 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp_30c009ab-380d-4bc7-a771-61d41ad10d35/pull/0.log" Nov 24 17:31:05 crc kubenswrapper[4768]: I1124 17:31:05.980732 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp_30c009ab-380d-4bc7-a771-61d41ad10d35/extract/0.log" Nov 24 17:31:05 crc kubenswrapper[4768]: I1124 17:31:05.981538 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp_30c009ab-380d-4bc7-a771-61d41ad10d35/pull/0.log" Nov 24 17:31:05 crc kubenswrapper[4768]: I1124 17:31:05.985310 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e9g2hp_30c009ab-380d-4bc7-a771-61d41ad10d35/util/0.log" Nov 24 17:31:06 crc kubenswrapper[4768]: I1124 17:31:06.179388 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fz5jq_eca00397-85e6-401b-b0a8-011a3307b0ee/extract-utilities/0.log" Nov 24 17:31:06 crc kubenswrapper[4768]: I1124 17:31:06.368222 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fz5jq_eca00397-85e6-401b-b0a8-011a3307b0ee/extract-utilities/0.log" Nov 24 17:31:06 crc kubenswrapper[4768]: I1124 17:31:06.390607 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fz5jq_eca00397-85e6-401b-b0a8-011a3307b0ee/extract-content/0.log" Nov 24 17:31:06 crc kubenswrapper[4768]: I1124 17:31:06.415878 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fz5jq_eca00397-85e6-401b-b0a8-011a3307b0ee/extract-content/0.log" Nov 24 17:31:06 crc kubenswrapper[4768]: I1124 17:31:06.578464 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fz5jq_eca00397-85e6-401b-b0a8-011a3307b0ee/extract-utilities/0.log" Nov 24 17:31:06 crc kubenswrapper[4768]: I1124 17:31:06.587591 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fz5jq_eca00397-85e6-401b-b0a8-011a3307b0ee/extract-content/0.log" Nov 24 17:31:06 crc kubenswrapper[4768]: I1124 17:31:06.779184 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dsv6c_41aba52e-e435-4061-88d5-30b6d8b78806/extract-utilities/0.log" Nov 24 17:31:06 crc kubenswrapper[4768]: I1124 17:31:06.866534 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fz5jq_eca00397-85e6-401b-b0a8-011a3307b0ee/registry-server/0.log" Nov 24 17:31:07 crc kubenswrapper[4768]: I1124 17:31:07.019795 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-dsv6c_41aba52e-e435-4061-88d5-30b6d8b78806/extract-content/0.log" Nov 24 17:31:07 crc kubenswrapper[4768]: I1124 17:31:07.027852 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dsv6c_41aba52e-e435-4061-88d5-30b6d8b78806/extract-content/0.log" Nov 24 17:31:07 crc kubenswrapper[4768]: I1124 17:31:07.064463 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dsv6c_41aba52e-e435-4061-88d5-30b6d8b78806/extract-utilities/0.log" Nov 24 17:31:07 crc kubenswrapper[4768]: I1124 17:31:07.277808 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dsv6c_41aba52e-e435-4061-88d5-30b6d8b78806/extract-utilities/0.log" Nov 24 17:31:07 crc kubenswrapper[4768]: I1124 17:31:07.319913 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dsv6c_41aba52e-e435-4061-88d5-30b6d8b78806/extract-content/0.log" Nov 24 17:31:07 crc kubenswrapper[4768]: I1124 17:31:07.528732 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t_f04516d3-2027-43f2-975d-294f284a7a36/util/0.log" Nov 24 17:31:07 crc kubenswrapper[4768]: I1124 17:31:07.680748 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t_f04516d3-2027-43f2-975d-294f284a7a36/util/0.log" Nov 24 17:31:07 crc kubenswrapper[4768]: I1124 17:31:07.703866 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dsv6c_41aba52e-e435-4061-88d5-30b6d8b78806/registry-server/0.log" Nov 24 17:31:07 crc kubenswrapper[4768]: I1124 17:31:07.734814 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t_f04516d3-2027-43f2-975d-294f284a7a36/pull/0.log" Nov 24 17:31:07 crc kubenswrapper[4768]: I1124 17:31:07.771430 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t_f04516d3-2027-43f2-975d-294f284a7a36/pull/0.log" Nov 24 17:31:07 crc kubenswrapper[4768]: I1124 17:31:07.910831 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t_f04516d3-2027-43f2-975d-294f284a7a36/util/0.log" Nov 24 17:31:07 crc kubenswrapper[4768]: I1124 17:31:07.914331 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t_f04516d3-2027-43f2-975d-294f284a7a36/pull/0.log" Nov 24 17:31:07 crc kubenswrapper[4768]: I1124 17:31:07.954142 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6bg88t_f04516d3-2027-43f2-975d-294f284a7a36/extract/0.log" Nov 24 17:31:08 crc kubenswrapper[4768]: I1124 17:31:08.082018 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-5zvk7_453d22cb-b151-4afd-8116-28d85514ca2c/marketplace-operator/0.log" Nov 24 17:31:08 crc kubenswrapper[4768]: I1124 17:31:08.124028 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-47fqh_9e71cc43-12fc-4315-992f-af825fe58680/extract-utilities/0.log" Nov 24 17:31:08 crc kubenswrapper[4768]: I1124 17:31:08.317068 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-47fqh_9e71cc43-12fc-4315-992f-af825fe58680/extract-utilities/0.log" Nov 24 17:31:08 crc kubenswrapper[4768]: I1124 17:31:08.331455 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-47fqh_9e71cc43-12fc-4315-992f-af825fe58680/extract-content/0.log" Nov 24 17:31:08 crc kubenswrapper[4768]: I1124 17:31:08.335700 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-47fqh_9e71cc43-12fc-4315-992f-af825fe58680/extract-content/0.log" Nov 24 17:31:08 crc kubenswrapper[4768]: I1124 17:31:08.485061 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-47fqh_9e71cc43-12fc-4315-992f-af825fe58680/extract-utilities/0.log" Nov 24 17:31:08 crc kubenswrapper[4768]: I1124 17:31:08.505251 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-47fqh_9e71cc43-12fc-4315-992f-af825fe58680/extract-content/0.log" Nov 24 17:31:08 crc kubenswrapper[4768]: I1124 17:31:08.585477 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-47fqh_9e71cc43-12fc-4315-992f-af825fe58680/registry-server/0.log" Nov 24 17:31:08 crc kubenswrapper[4768]: I1124 17:31:08.667785 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4qjbs_3ec76654-6209-40eb-85dc-861ddae3c79f/extract-utilities/0.log" Nov 24 17:31:08 crc kubenswrapper[4768]: I1124 17:31:08.847189 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4qjbs_3ec76654-6209-40eb-85dc-861ddae3c79f/extract-content/0.log" Nov 24 17:31:08 crc kubenswrapper[4768]: I1124 17:31:08.855266 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4qjbs_3ec76654-6209-40eb-85dc-861ddae3c79f/extract-content/0.log" Nov 24 17:31:08 crc kubenswrapper[4768]: I1124 17:31:08.867055 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4qjbs_3ec76654-6209-40eb-85dc-861ddae3c79f/extract-utilities/0.log" Nov 24 17:31:09 crc kubenswrapper[4768]: I1124 17:31:09.031439 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4qjbs_3ec76654-6209-40eb-85dc-861ddae3c79f/extract-utilities/0.log" Nov 24 17:31:09 crc kubenswrapper[4768]: I1124 17:31:09.042580 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4qjbs_3ec76654-6209-40eb-85dc-861ddae3c79f/extract-content/0.log" Nov 24 17:31:09 crc kubenswrapper[4768]: I1124 17:31:09.309968 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4qjbs_3ec76654-6209-40eb-85dc-861ddae3c79f/registry-server/0.log" Nov 24 17:31:32 crc kubenswrapper[4768]: E1124 17:31:32.290325 4768 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.58:54032->38.102.83.58:40487: write tcp 38.102.83.58:54032->38.102.83.58:40487: write: broken pipe Nov 24 17:31:34 crc kubenswrapper[4768]: I1124 17:31:34.892628 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:31:34 crc kubenswrapper[4768]: I1124 17:31:34.892955 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:32:04 crc kubenswrapper[4768]: I1124 17:32:04.892769 4768 patch_prober.go:28] interesting pod/machine-config-daemon-jf255 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 17:32:04 crc kubenswrapper[4768]: I1124 17:32:04.893335 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 17:32:04 crc kubenswrapper[4768]: I1124 17:32:04.893399 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jf255" Nov 24 17:32:04 crc kubenswrapper[4768]: I1124 17:32:04.894071 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe"} pod="openshift-machine-config-operator/machine-config-daemon-jf255" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 17:32:04 crc kubenswrapper[4768]: I1124 17:32:04.894115 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" containerName="machine-config-daemon" containerID="cri-o://24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe" gracePeriod=600 Nov 24 17:32:05 crc kubenswrapper[4768]: E1124 17:32:05.017648 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:32:05 crc kubenswrapper[4768]: I1124 17:32:05.530596 4768 generic.go:334] "Generic (PLEG): container finished" podID="517d8128-bef5-40a3-a786-5010780c2a58" containerID="24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe" exitCode=0 Nov 24 17:32:05 crc kubenswrapper[4768]: I1124 17:32:05.530673 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jf255" event={"ID":"517d8128-bef5-40a3-a786-5010780c2a58","Type":"ContainerDied","Data":"24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe"} Nov 24 17:32:05 crc kubenswrapper[4768]: I1124 17:32:05.530725 4768 
scope.go:117] "RemoveContainer" containerID="5d13370034de2225dc19449060f182bae1bf4a76aba56f95b931132dc577bda6" Nov 24 17:32:05 crc kubenswrapper[4768]: I1124 17:32:05.531750 4768 scope.go:117] "RemoveContainer" containerID="24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe" Nov 24 17:32:05 crc kubenswrapper[4768]: E1124 17:32:05.532206 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:32:17 crc kubenswrapper[4768]: I1124 17:32:17.581740 4768 scope.go:117] "RemoveContainer" containerID="24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe" Nov 24 17:32:17 crc kubenswrapper[4768]: E1124 17:32:17.582567 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:32:32 crc kubenswrapper[4768]: I1124 17:32:32.580554 4768 scope.go:117] "RemoveContainer" containerID="24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe" Nov 24 17:32:32 crc kubenswrapper[4768]: E1124 17:32:32.581434 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:32:39 crc kubenswrapper[4768]: I1124 17:32:39.850954 4768 generic.go:334] "Generic (PLEG): container finished" podID="fb445038-f451-4347-8f74-15048f7cfb4b" containerID="2a198039fc014cafe2a3e9569511f04b5505d1ff5c406db9a6f6edc9d790f379" exitCode=0 Nov 24 17:32:39 crc kubenswrapper[4768]: I1124 17:32:39.851149 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kmfkt/must-gather-k8kt2" event={"ID":"fb445038-f451-4347-8f74-15048f7cfb4b","Type":"ContainerDied","Data":"2a198039fc014cafe2a3e9569511f04b5505d1ff5c406db9a6f6edc9d790f379"} Nov 24 17:32:39 crc kubenswrapper[4768]: I1124 17:32:39.853189 4768 scope.go:117] "RemoveContainer" containerID="2a198039fc014cafe2a3e9569511f04b5505d1ff5c406db9a6f6edc9d790f379" Nov 24 17:32:40 crc kubenswrapper[4768]: I1124 17:32:40.406890 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kmfkt_must-gather-k8kt2_fb445038-f451-4347-8f74-15048f7cfb4b/gather/0.log" Nov 24 17:32:45 crc kubenswrapper[4768]: I1124 17:32:45.582577 4768 scope.go:117] "RemoveContainer" containerID="24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe" Nov 24 17:32:45 crc kubenswrapper[4768]: E1124 17:32:45.583745 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:32:51 crc kubenswrapper[4768]: I1124 17:32:51.008832 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kmfkt/must-gather-k8kt2"] Nov 24 17:32:51 crc kubenswrapper[4768]: I1124 17:32:51.009633 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-kmfkt/must-gather-k8kt2" podUID="fb445038-f451-4347-8f74-15048f7cfb4b" containerName="copy" containerID="cri-o://31e79c3c683161a72d9b357d8bf9552fc4d0c179c2bfd24e0fbd510a2324b314" gracePeriod=2 Nov 24 17:32:51 crc kubenswrapper[4768]: I1124 17:32:51.022007 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kmfkt/must-gather-k8kt2"] Nov 24 17:32:51 crc kubenswrapper[4768]: I1124 17:32:51.577743 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kmfkt_must-gather-k8kt2_fb445038-f451-4347-8f74-15048f7cfb4b/copy/0.log" Nov 24 17:32:51 crc kubenswrapper[4768]: I1124 17:32:51.578549 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kmfkt/must-gather-k8kt2" Nov 24 17:32:51 crc kubenswrapper[4768]: I1124 17:32:51.719414 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dt27\" (UniqueName: \"kubernetes.io/projected/fb445038-f451-4347-8f74-15048f7cfb4b-kube-api-access-2dt27\") pod \"fb445038-f451-4347-8f74-15048f7cfb4b\" (UID: \"fb445038-f451-4347-8f74-15048f7cfb4b\") " Nov 24 17:32:51 crc kubenswrapper[4768]: I1124 17:32:51.719821 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fb445038-f451-4347-8f74-15048f7cfb4b-must-gather-output\") pod \"fb445038-f451-4347-8f74-15048f7cfb4b\" (UID: \"fb445038-f451-4347-8f74-15048f7cfb4b\") " Nov 24 17:32:51 crc kubenswrapper[4768]: I1124 17:32:51.725802 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb445038-f451-4347-8f74-15048f7cfb4b-kube-api-access-2dt27" (OuterVolumeSpecName: "kube-api-access-2dt27") pod "fb445038-f451-4347-8f74-15048f7cfb4b" (UID: "fb445038-f451-4347-8f74-15048f7cfb4b"). InnerVolumeSpecName "kube-api-access-2dt27". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 17:32:51 crc kubenswrapper[4768]: I1124 17:32:51.822890 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dt27\" (UniqueName: \"kubernetes.io/projected/fb445038-f451-4347-8f74-15048f7cfb4b-kube-api-access-2dt27\") on node \"crc\" DevicePath \"\"" Nov 24 17:32:51 crc kubenswrapper[4768]: I1124 17:32:51.868840 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb445038-f451-4347-8f74-15048f7cfb4b-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "fb445038-f451-4347-8f74-15048f7cfb4b" (UID: "fb445038-f451-4347-8f74-15048f7cfb4b"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 17:32:51 crc kubenswrapper[4768]: I1124 17:32:51.924926 4768 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fb445038-f451-4347-8f74-15048f7cfb4b-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 24 17:32:51 crc kubenswrapper[4768]: I1124 17:32:51.961902 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kmfkt_must-gather-k8kt2_fb445038-f451-4347-8f74-15048f7cfb4b/copy/0.log" Nov 24 17:32:51 crc kubenswrapper[4768]: I1124 17:32:51.962758 4768 generic.go:334] "Generic (PLEG): container finished" podID="fb445038-f451-4347-8f74-15048f7cfb4b" containerID="31e79c3c683161a72d9b357d8bf9552fc4d0c179c2bfd24e0fbd510a2324b314" exitCode=143 Nov 24 17:32:51 crc kubenswrapper[4768]: I1124 17:32:51.962850 4768 scope.go:117] "RemoveContainer" containerID="31e79c3c683161a72d9b357d8bf9552fc4d0c179c2bfd24e0fbd510a2324b314" Nov 24 17:32:51 crc kubenswrapper[4768]: I1124 17:32:51.963020 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kmfkt/must-gather-k8kt2" Nov 24 17:32:51 crc kubenswrapper[4768]: I1124 17:32:51.995858 4768 scope.go:117] "RemoveContainer" containerID="2a198039fc014cafe2a3e9569511f04b5505d1ff5c406db9a6f6edc9d790f379" Nov 24 17:32:52 crc kubenswrapper[4768]: I1124 17:32:52.078150 4768 scope.go:117] "RemoveContainer" containerID="31e79c3c683161a72d9b357d8bf9552fc4d0c179c2bfd24e0fbd510a2324b314" Nov 24 17:32:52 crc kubenswrapper[4768]: E1124 17:32:52.079135 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31e79c3c683161a72d9b357d8bf9552fc4d0c179c2bfd24e0fbd510a2324b314\": container with ID starting with 31e79c3c683161a72d9b357d8bf9552fc4d0c179c2bfd24e0fbd510a2324b314 not found: ID does not exist" containerID="31e79c3c683161a72d9b357d8bf9552fc4d0c179c2bfd24e0fbd510a2324b314" Nov 24 17:32:52 crc kubenswrapper[4768]: I1124 17:32:52.079177 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31e79c3c683161a72d9b357d8bf9552fc4d0c179c2bfd24e0fbd510a2324b314"} err="failed to get container status \"31e79c3c683161a72d9b357d8bf9552fc4d0c179c2bfd24e0fbd510a2324b314\": rpc error: code = NotFound desc = could not find container \"31e79c3c683161a72d9b357d8bf9552fc4d0c179c2bfd24e0fbd510a2324b314\": container with ID starting with 31e79c3c683161a72d9b357d8bf9552fc4d0c179c2bfd24e0fbd510a2324b314 not found: ID does not exist" Nov 24 17:32:52 crc kubenswrapper[4768]: I1124 17:32:52.079203 4768 scope.go:117] "RemoveContainer" containerID="2a198039fc014cafe2a3e9569511f04b5505d1ff5c406db9a6f6edc9d790f379" Nov 24 17:32:52 crc kubenswrapper[4768]: E1124 17:32:52.080945 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a198039fc014cafe2a3e9569511f04b5505d1ff5c406db9a6f6edc9d790f379\": container with ID starting with 2a198039fc014cafe2a3e9569511f04b5505d1ff5c406db9a6f6edc9d790f379 not found: ID does not exist" containerID="2a198039fc014cafe2a3e9569511f04b5505d1ff5c406db9a6f6edc9d790f379" Nov 24 17:32:52 crc kubenswrapper[4768]: I1124 17:32:52.080994 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a198039fc014cafe2a3e9569511f04b5505d1ff5c406db9a6f6edc9d790f379"} err="failed to get container status 
\"2a198039fc014cafe2a3e9569511f04b5505d1ff5c406db9a6f6edc9d790f379\": rpc error: code = NotFound desc = could not find container \"2a198039fc014cafe2a3e9569511f04b5505d1ff5c406db9a6f6edc9d790f379\": container with ID starting with 2a198039fc014cafe2a3e9569511f04b5505d1ff5c406db9a6f6edc9d790f379 not found: ID does not exist" Nov 24 17:32:53 crc kubenswrapper[4768]: I1124 17:32:53.594340 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb445038-f451-4347-8f74-15048f7cfb4b" path="/var/lib/kubelet/pods/fb445038-f451-4347-8f74-15048f7cfb4b/volumes" Nov 24 17:33:00 crc kubenswrapper[4768]: I1124 17:33:00.581279 4768 scope.go:117] "RemoveContainer" containerID="24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe" Nov 24 17:33:00 crc kubenswrapper[4768]: E1124 17:33:00.582154 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:33:12 crc kubenswrapper[4768]: I1124 17:33:12.580217 4768 scope.go:117] "RemoveContainer" containerID="24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe" Nov 24 17:33:12 crc kubenswrapper[4768]: E1124 17:33:12.580991 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:33:27 crc kubenswrapper[4768]: I1124 17:33:27.580975 4768 scope.go:117] "RemoveContainer" containerID="24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe" Nov 24 17:33:27 crc kubenswrapper[4768]: E1124 17:33:27.581612 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:33:42 crc kubenswrapper[4768]: I1124 17:33:42.581333 4768 scope.go:117] "RemoveContainer" containerID="24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe" Nov 24 17:33:42 crc kubenswrapper[4768]: E1124 17:33:42.582411 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:33:55 crc kubenswrapper[4768]: I1124 17:33:55.581155 4768 scope.go:117] "RemoveContainer" containerID="24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe" Nov 24 17:33:55 crc kubenswrapper[4768]: E1124 17:33:55.583419 4768 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:34:09 crc kubenswrapper[4768]: I1124 17:34:09.589688 4768 scope.go:117] "RemoveContainer" containerID="24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe" Nov 24 17:34:09 crc kubenswrapper[4768]: E1124 17:34:09.590420 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:34:24 crc kubenswrapper[4768]: I1124 17:34:24.581230 4768 scope.go:117] "RemoveContainer" containerID="24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe" Nov 24 17:34:24 crc kubenswrapper[4768]: E1124 17:34:24.582108 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:34:28 crc kubenswrapper[4768]: I1124 17:34:28.814622 4768 scope.go:117] "RemoveContainer" containerID="a31b4d64b371cdba6790f9d4d2b6d8b9dca30df8d7892cc04ba30e697e825e91" Nov 24 17:34:37 crc kubenswrapper[4768]: I1124 17:34:37.581860 4768 scope.go:117] "RemoveContainer" containerID="24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe" Nov 24 17:34:37 crc kubenswrapper[4768]: E1124 17:34:37.582917 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:34:49 crc kubenswrapper[4768]: I1124 17:34:49.592476 4768 scope.go:117] "RemoveContainer" containerID="24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe" Nov 24 17:34:49 crc kubenswrapper[4768]: E1124 17:34:49.593204 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:35:04 crc kubenswrapper[4768]: I1124 17:35:04.580901 4768 scope.go:117] "RemoveContainer" containerID="24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe" Nov 24 17:35:04 crc kubenswrapper[4768]: E1124 17:35:04.581858 4768 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:35:16 crc kubenswrapper[4768]: I1124 17:35:16.581314 4768 scope.go:117] "RemoveContainer" containerID="24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe" Nov 24 17:35:16 crc kubenswrapper[4768]: E1124 17:35:16.582091 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:35:28 crc kubenswrapper[4768]: I1124 17:35:28.581248 4768 scope.go:117] "RemoveContainer" containerID="24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe" Nov 24 17:35:28 crc kubenswrapper[4768]: E1124 17:35:28.582030 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:35:43 crc kubenswrapper[4768]: I1124 17:35:43.582868 4768 scope.go:117] "RemoveContainer" containerID="24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe" Nov 24 17:35:43 crc kubenswrapper[4768]: E1124 17:35:43.583576 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:35:55 crc kubenswrapper[4768]: I1124 17:35:55.581072 4768 scope.go:117] "RemoveContainer" containerID="24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe" Nov 24 17:35:55 crc kubenswrapper[4768]: E1124 17:35:55.581749 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58" Nov 24 17:36:08 crc kubenswrapper[4768]: I1124 17:36:08.581257 4768 scope.go:117] "RemoveContainer" containerID="24537f41eea72b10421e31ce893aaee0f8a3a4078e28246b2424768aaf54b8fe" Nov 24 17:36:08 crc kubenswrapper[4768]: E1124 17:36:08.583075 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jf255_openshift-machine-config-operator(517d8128-bef5-40a3-a786-5010780c2a58)\"" pod="openshift-machine-config-operator/machine-config-daemon-jf255" podUID="517d8128-bef5-40a3-a786-5010780c2a58"